Rate estimation revisited


Abstract

Two views of Pavlovian conditioning have dominated theoretical discourse. The classical associative view holds that associations are learned based on temporal contiguity between stimuli, and conditioned responses directly reflect associative strength. The representational view, exemplified by Rate Estimation Theory (Gallistel & Gibbon, 2000), holds that animals learn the structure of the stimulus distribution, from which a measure of contingency between stimuli is derived and used to generate conditioned responses. Unlike contiguity, contingency is a relative measure, comparing the rate of reinforcement in the presence of a stimulus to the background rate. This turns out to be crucial for explaining the effects of manipulating the background rate while holding the stimulus-conditional rate constant (i.e., changing contingency without changing contiguity). It has also been argued that contiguity theories face irremediable conceptual difficulties stemming from the coercion of continuous time into discrete bins. This paper makes two contributions to the debate. First, it shows that Rate Estimation Theory faces its own computational and conceptual problems. Second, it shows how to fix these problems while retaining the core of the theory. Surprisingly, this leads to the insight that rates can be estimated using an algorithm closely resembling a classical associative theory (the Rescorla-Wagner model). The key difference lies in the response rule rather than in the learning rule. This suggests that the gulf between associative and representational theories is smaller than previously thought.
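The abstract's central claim, that rate estimation can be carried out by a Rescorla-Wagner-style learning rule while the contingency comparison lives in the response rule, can be illustrated with a minimal sketch. The delta-rule update below is the standard Rescorla-Wagner form; treating the background as an ever-present stimulus and reading the learned weights as rate estimates is the interpretation suggested by the abstract. The ratio-based response rule, the learning rate, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
# Sketch: a Rescorla-Wagner (delta-rule) learner whose weights are read as
# reinforcement-rate estimates, paired with a contingency-based response rule.
# Parameter values (alpha, threshold) are illustrative assumptions.

def rw_update(weights, present, reward, alpha=0.1):
    """One delta-rule step: V_i <- V_i + alpha * x_i * (r - sum_j V_j x_j).

    weights: list of associative strengths, one per stimulus
    present: 0/1 indicators for which stimuli are present on this trial
    reward:  reinforcement received on this trial
    """
    prediction = sum(w for w, x in zip(weights, present) if x)
    error = reward - prediction
    return [w + alpha * x * error for w, x in zip(weights, present)]

def contingency_response(weights, threshold=2.0):
    """Respond when the CS's estimated rate exceeds the background rate
    by a multiplicative threshold -- a relative (contingency) criterion,
    unlike a response rule that reads out CS strength alone."""
    background_rate, cs_rate = weights
    return cs_rate / max(background_rate, 1e-9) > threshold
```

Trained on trials where reinforcement occurs only when the conditioned stimulus accompanies the ever-present background, the background weight decays toward zero and the CS weight absorbs the prediction, so the rate ratio, and hence responding, depends on the background rate even though the learning rule itself is purely associative.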
