A Refined Extended Model of Intention Selection (MIS)

Abstract

The Model of Intention Selection (MIS) offers a computational account of how intentions are selected from long-term memory. Recent empirical validations, however, have revealed systematic deviations from its core predictions, indicating critical limitations in its theoretical formulation. Specifically, the model assumes linear reward scaling, constant processing rates, trial independence, non-interactive components, and a fixed capacity allocation strategy, all of which are inconsistent with observed data. We propose five theoretically grounded refinements to address these shortcomings. These extensions incorporate: (1) psychophysical scaling of reward based on Stevens' Power Law; (2) time-dependent processing rates to capture dynamic interactions with stimulus onset asynchrony (SOA); (3) context-dependent matching to account for sequential priming effects; (4) a component integration function to model synergistic interactions within an intention; and (5) adaptive capacity allocation guided by principles of resource-rationality. By integrating these extensions, the revised model not only accounts for previously unexplained empirical patterns in recent data but also enhances theoretical coherence with established principles in cognitive science, generating a new set of precise, testable predictions. This work exemplifies a data-driven approach to theoretical refinement in computational cognitive modeling.
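As a brief illustration of extension (1), Stevens' Power Law relates subjective magnitude to physical intensity via a power function. Applied to reward, the refined scaling would presumably take a form along these lines (the symbols R, k, and beta are illustrative choices, not the authors' notation, and the compressive-exponent assumption is our own):

\psi(R) = k \, R^{\beta}, \qquad 0 < \beta < 1 \ \text{(compressive scaling, assumed)}

where R is the objective reward value, k is a scaling constant, and \beta is the psychophysical exponent; under this sketch, equal increments in objective reward yield diminishing increments in effective reward, in contrast to the linear scaling assumed by the original MIS.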
