Cognitive Alignment Between Humans and LLMs Across Multimodal Domains

Abstract

Large Language Models (LLMs) exhibit remarkable abilities with text, prompting investigations into how well they align with human cognition. Using the Brain-Based Componential Semantic Representation (BBSR), a neurobiologically grounded semantic framework, we evaluate nine LLMs, including the Qwen2, Llama-3, Llama-3.1, and GPT series. We examine their multimodal cognitive boundaries, representational similarity, cross-modality consistency, abstract/concrete divergences, psycholinguistic factors, and stability across repeated responses. Larger models align more closely with human cognition, particularly for concrete concepts and early-acquired words. Still, discrepancies persist, especially for abstract concepts, spatial cognition, embodied experiences (e.g., olfaction, gustation), and causal reasoning. These findings reveal limitations in current LLM cognitive architectures and underscore the need for models that more fully embody human cognition.
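
For readers unfamiliar with how representational similarity between model and human semantic spaces is typically quantified, the sketch below shows one common approach: correlating the pairwise-dissimilarity structure of LLM embeddings with that of human BBSR feature ratings over the same concepts. This is a minimal illustration under assumed inputs (the `llm_embeddings` and `bbsr_features` matrices and their shapes are hypothetical), not the authors' actual analysis code.

```python
# Minimal representational similarity analysis (RSA) sketch.
# Assumes two feature matrices over the same N concepts, in the same row order:
#   llm_embeddings: N x d1 array of model-derived features (hypothetical)
#   bbsr_features:  N x d2 array of human BBSR ratings (hypothetical)
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_alignment(llm_embeddings: np.ndarray, bbsr_features: np.ndarray) -> float:
    """Spearman correlation between the two representational dissimilarity structures."""
    # Pairwise cosine dissimilarities within each representational space
    # (pdist returns the condensed upper-triangle vector directly).
    llm_rdm = pdist(llm_embeddings, metric="cosine")
    bbsr_rdm = pdist(bbsr_features, metric="cosine")
    rho, _ = spearmanr(llm_rdm, bbsr_rdm)
    return rho

# Example with random stand-in data for 50 concepts.
rng = np.random.default_rng(0)
print(rsa_alignment(rng.normal(size=(50, 768)), rng.normal(size=(50, 65))))
```

Higher correlations indicate that concepts judged similar by humans are also represented as similar by the model, which is one way the size-related alignment differences described above could be measured.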
