Modality-Specific Abstraction in Event Perception
Abstract
Dynamic events can be perceived at different levels of granularity, with coarser contexts (e.g., “morning routine”) requiring more conceptual integration than finer contexts (e.g., “having breakfast”). Across three experiments, we investigated whether this increased integration yields more abstract mental representations (i.e., fewer perceptual details). Participants encoded events (presented as text or video) in coarse- or fine-grained contexts and then matched these events to corresponding test stimuli. Events in coarse contexts consistently took longer to process, in line with their higher integration demands. Participants were also faster when the encoding and test modalities matched (suggesting modality-specific processing) and when the test modality was video (reflecting slower reading times for text). Crucially, we found no interaction between context grain and either the encoding or the test modality. Contrary to the expectation that coarser contexts would produce more amodal mental representations, participants showed no coarse-grain advantage in recognizing events across modalities. We propose that the contents of everyday events may be abstracted to a similar degree in mental representations of both fine and coarse contexts. Furthermore, our results provide evidence for amodal representations when the environment is unpredictable: participants transformed their mental representations into both verbally and visually compatible formats, regardless of the original encoding modality.