Set-Size Scaling Effects in Visual Working Memory Models Identify the Decoding Rule Used in Retrieval
Abstract
We examined how recent visual working memory (VWM) models for continuous-outcome tasks quantify memory capacity limitations through model parameters, and whether these quantifications conform to the inverse square-root scaling predicted by the sample-size model. We simulated response errors under varying memory load using a Poisson neural population coding framework, in which load was manipulated by decreasing the neural population size to represent increases in memory set size. We applied multiple models that differed in their noise distributions and decision rules. For each model, we compared alternative parameterization schemes, including variants constrained by the sample-size model, each allowing different parameters to vary with load. Across models, the parameter expressing capacity-related change depended systematically on the decoding strategy used in the decision rule. When response errors were generated under maximum likelihood (ML) decoding, the dispersion parameter σ scaled with decreasing population size; in contrast, under maximum-output (MAX) decoding, the signal amplitude parameter γ scaled with decreasing population size. These results demonstrate that the apparent locus of capacity limitation in continuous VWM depends systematically on how population-level neural activity is decoded at retrieval, and they provide evidence that the form of decoding used in making decisions is identifiable from patterns of parameter invariance in VWM models.
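The simulation framework described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes von Mises tuning curves over a circular feature dimension, models increased memory load as a smaller neural population per item, and contrasts ML decoding (argmax of the Poisson log-likelihood over candidate stimulus values) with MAX decoding (reporting the preferred value of the most active neuron). All parameter values (gain, tuning width, grid resolution) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning_curves(stimulus, preferred, gain=20.0, kappa=2.0):
    # Von Mises tuning over a circular feature (e.g., color hue in radians).
    # gain and kappa are illustrative choices, not values from the paper.
    return gain * np.exp(kappa * (np.cos(stimulus - preferred) - 1.0))

def simulate_and_decode(stimulus, n_neurons, rule):
    # Evenly spaced preferred values; Poisson spiking noise on the rates.
    preferred = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
    spikes = rng.poisson(tuning_curves(stimulus, preferred))
    if rule == "ML":
        # Maximum-likelihood decoding: evaluate the Poisson log-likelihood
        # on a grid of candidate stimulus values and take the argmax.
        grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
        lam = tuning_curves(grid[:, None], preferred[None, :])
        loglik = (spikes * np.log(lam) - lam).sum(axis=1)
        return grid[np.argmax(loglik)]
    # Maximum-output (MAX) decoding: report the preferred value of the
    # single most active neuron.
    return preferred[np.argmax(spikes)]

def circular_error(est, true):
    # Wrap the response error into (-pi, pi].
    return np.angle(np.exp(1j * (est - true)))

# Load manipulation: fewer neurons per item at higher set size.
for n_neurons in (64, 16):
    errs = [circular_error(simulate_and_decode(s := rng.uniform(-np.pi, np.pi),
                                               n_neurons, "ML"), s)
            for _ in range(2000)]
    print(f"population size {n_neurons}: ML error SD = {np.std(errs):.3f}")
```

Under ML decoding, error dispersion grows as the population shrinks (roughly the inverse square-root scaling the sample-size model predicts); swapping `"ML"` for `"MAX"` lets one compare how the two decision rules redistribute the load effect across model parameters.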