Integrating Language Model Embeddings into the ACT-R Cognitive Modeling Framework

Abstract

We use the ACT-R cognitive modeling framework to simulate the associative priming effect in lexical decision tasks, where prior exposure to a prime word influences response times to a target word, depending on the degree of semantic association between them. Currently, ACT-R relies on manually coded similarity scores, which are labor-intensive to produce and prone to inaccuracies, especially for large vocabularies. In this study, we replace these hand-coded relationships with language model embedding vectors from Word2Vec as well as state-of-the-art large language models such as BERT and Gemma, improving both the accuracy and scalability of the model. Our results show a strong correlation between ACT-R predictions and human response times observed in a lexical decision dataset. We see this as a proof of concept for integrating embedding vectors into algorithmic-level models of language processing.
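The core idea of the abstract can be sketched in a few lines: derive pairwise similarities from embedding vectors instead of coding them by hand. The sketch below is illustrative only and is not taken from the paper: the toy 4-dimensional vectors stand in for real Word2Vec or BERT embeddings, and the linear mapping of cosine similarity onto ACT-R's default similarity range of [-1, 0] is one plausible choice of rescaling, not necessarily the authors' method.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def to_actr_similarity(cos_sim):
    """Rescale cosine similarity onto ACT-R's default range [-1, 0].

    This linear mapping is an assumption for illustration; identical
    words map to 0 (maximum similarity), opposites map to -1.
    """
    return (cos_sim - 1.0) / 2.0

# Toy embeddings standing in for vectors from a real language model.
emb = {
    "doctor": np.array([0.9, 0.1, 0.3, 0.0]),
    "nurse":  np.array([0.8, 0.2, 0.4, 0.1]),
    "bread":  np.array([0.0, 0.9, 0.1, 0.8]),
}

# A semantically related prime-target pair should receive a higher
# (less negative) ACT-R similarity than an unrelated pair.
related = to_actr_similarity(cosine(emb["doctor"], emb["nurse"]))
unrelated = to_actr_similarity(cosine(emb["doctor"], emb["bread"]))
assert related > unrelated
```

In an ACT-R model, these derived values would be supplied wherever hand-coded similarity scores were previously set, so that retrieval latency for the target chunk reflects its embedding-based association with the prime.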
