Enhancing Visuospatial Mapping in Relational Category Learning


Abstract

Visual relational concepts—defined by patterns of relationships between entities—are thought to require structured, compositional representations with explicit role information about each entity. Analogical mapping over compositional representations is a key strategy for acquiring such concepts, but in complex situations with many entities and relations, this process can be cognitively demanding. As a result, learning may occur over feature-based representations, where exemplars are encoded as unstructured lists of entities and relations, losing crucial role information and limiting generalizability. To reduce the cognitive load of analogical mapping, we explored the effectiveness of two visuospatial training aids: (1) spatially organizing exemplars by category to facilitate comparisons, and (2) using color coding to highlight the roles of entities within each exemplar. Across three experiments, we examined whether these visuospatial aids improve learning rates on the Synthetic Visual Reasoning Test (SVRT), a collection of 23 problems that require learning relational concepts. Our results showed that displays that spatially sorted previous instances into positive and negative sets led to faster concept learning. Learning was faster overall when problems were ordered easy-to-hard rather than randomly, but sorted displays were more effective in either case. Color coding proved beneficial only when colors unambiguously and non-redundantly linked entities that played corresponding roles; when color coding did not support a clear mapping, it interfered with learning. These findings suggest that rapid learning of relational concepts can be facilitated by display characteristics that support analogical mapping through comparison.
