What Makes Co-Speech Gestures Memorable?


Abstract

Co-speech gestures, the spontaneous hand movements speakers produce as they talk, strongly enhance listeners’ cognitive processing of spoken messages across development. Yet the mechanisms by which gesture supports memory, particularly in interaction with speech, remain poorly understood, limiting the intentional use of gesture to improve learning outside of lab settings. Here, we ask whether certain gestures are consistently better remembered across people, independent of prior experience with the stimuli (the memorability effect), and if so, what semantic and visual form features explain this effect. We created 360 10-second audiovisual stimuli by video recording 20 actors producing natural, unscripted speech and gestures while pretending to explain Piagetian liquid conservation to a child. Online participants completed a study–test memory task with video-only, audio-only, and audiovisual versions of the stimuli. Participants showed consistent memory performance in all three conditions, and memorability of gesture+speech (audiovisual stimuli) was predicted by the memorability of both its gesture and speech components. We then quantified features of the gestural stimuli using three methods: trained coders, automatic computational analysis, and online crowdsourcing. Gestures were more memorable when they conveyed more information, and were most memorable when originally produced with memorable speech. Together, these findings reveal nuanced interactions between gesture and speech during memory processing, providing new insights into the memory mechanisms underlying gesture’s benefits for learning and communication.