Dissecting the Hierarchy of Gesture Comprehension: Evidence from a Multilevel Priming Paradigm and Drift-Diffusion Model

Abstract

Gestures convey information at multiple representational levels, from raw kinematics to rich social meaning, yet most existing research treats them as a single categorical cue. Here we combined a novel multilevel priming task with drift-diffusion modeling (DDM) to test whether gesture comprehension unfolds hierarchically and how prime-gesture congruency is resolved at each level. Participants judged the congruency of static thumbs-up/down targets preceded by one of four prime types of successively increasing abstraction: contour outlines, directional arrows, valence words, and social-scene photographs. Reaction times and accuracy were analyzed with repeated-measures ANOVAs; latent decision parameters were estimated with DDM. Behaviorally, response times lengthened stepwise from contour to social primes. DDM revealed a parallel decline in drift rate, indicating progressively slower evidence accumulation, while the decision boundary remained stable. Congruency effects were level-specific: incongruent contours decreased drift rate, whereas incongruent word and social primes increased it, suggesting qualitatively different conflict-resolution mechanisms. These results provide the first behavioral-computational evidence that gesture processing is genuinely hierarchical and that conflict monitoring adapts to the representational level at which a mismatch occurs, bridging evolutionary, psycholinguistic, and cognitive-control accounts of multimodal communication. Methodologically, pairing a multilevel gesture paradigm with hierarchical DDM offers a scalable tool for investigating multimodal interaction in both basic research and applied settings.
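For readers unfamiliar with the DDM parameters the abstract refers to, the following is a minimal illustrative sketch (not the authors' fitting procedure, and all parameter values here are arbitrary assumptions) of a two-boundary drift-diffusion process. It shows how, with the decision boundary held fixed, a lower drift rate alone produces longer simulated reaction times, the pattern reported for the contour-to-social prime gradient:

```python
import random

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001,
                 non_decision=0.3, max_t=5.0, rng=None):
    """Simulate one trial of a basic symmetric two-boundary DDM.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it crosses +boundary or -boundary (or max_t
    elapses). Returns (reaction_time_seconds, choice) with choice +1/-1.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # noise scales with sqrt(dt)
    while abs(x) < boundary and t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    choice = 1 if x >= boundary else -1
    return non_decision + t, choice

# Lower drift rate -> slower average evidence accumulation -> longer
# mean RT, with the boundary held constant across conditions.
rng = random.Random(42)
fast = [simulate_ddm(2.0, 1.0, rng=rng)[0] for _ in range(500)]
slow = [simulate_ddm(0.8, 1.0, rng=rng)[0] for _ in range(500)]
print(sum(fast) / len(fast) < sum(slow) / len(slow))
```

In practice, DDM parameters are estimated from empirical RT and accuracy distributions (e.g., with hierarchical Bayesian fitting) rather than forward-simulated; the sketch only illustrates why a drift-rate decline with a stable boundary predicts the stepwise RT lengthening the abstract describes.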