Abstract
This manuscript describes how learning curves can provide a strong test for computational models of cognitive processes. As an example, we show how this method can be used to evaluate the Exemplar-Based Random-Walk model of categorization (EBRW; ). EBRW, an extension of the Generalized Context Model (GCM; ), predicts that mean response times (RTs) follow a power function of practice. It can be shown analytically, however, that the learning rate (i.e., the curvature of the power function) predicted by the model can only equal 1, a value rarely observed in empirical data. We also explored an extended version of EBRW that includes background-noise elements () and identified conditions under which this model can predict curvatures different from 1. The inability of these models to predict the wide variety of curvatures observed in human data can be resolved by a simple extension of EBRW in which the original exponential distribution of retrieval times is replaced by a Weibull distribution. Additional predictions regarding learning curves are discussed.