The social transfer function: how dynamic predictions of facial consequences drive judgements of social contingency


Abstract

Human social interactions abound with time-aligned multimodal information such as nods and eyeblinks, yet little is known about how these cues contribute to the detection of social contingency, i.e. how an observer knows that two people are genuinely interacting with one another. We developed a novel experimental paradigm in which observers discriminate between video recordings of genuine and fake dyadic interactions based solely on the interplay between the speaker's speech and/or facial expressions and the listener's facial backchanneling cues. Using a combination of computational modeling with temporal response functions (TRFs) and behavioral data from two independent experiments (N = 206), we show that observers recognize genuine social interactions above chance; that, to do so, they causally rely on the link between the speaker's speech and the listener's mouth and eye movements; and that this inference is driven by time-aligned, dynamic predictions rather than by average quantities of movement. In both experiments, judgements of social contingency are well predicted by a computational model that evaluates the agreement between the observed data and the output of a pre-learned 'social transfer function', which dynamically predicts the facial consequences of a given speech signal. These results provide mechanistic insight into the features that support the perception of social contingency, and could potentially be used to identify markers of contingency in people with disorders of consciousness, autism and social anxiety.
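In TRF terms, a 'social transfer function' of the kind the abstract describes is a lagged linear mapping from a speech signal to a listener's facial signals, and the contingency score is how well its predictions agree with what is actually observed. The sketch below illustrates that idea on synthetic toy data with a ridge-regularised lagged regression; the variable names, kernel shape, and scoring choice are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged_design(x, n_lags):
    """Design matrix whose columns are delayed copies of x (lags 0..n_lags-1)."""
    n = len(x)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:n - k]
    return X

# Toy data: a "speech envelope" and a "listener facial signal" that is,
# by construction, a delayed, smoothed response to the speech plus noise.
n, n_lags = 2000, 40
speech = rng.standard_normal(n)
true_kernel = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)
face = lagged_design(speech, n_lags) @ true_kernel + 0.5 * rng.standard_normal(n)

# Learn the transfer function by ridge-regularised least squares.
X = lagged_design(speech, n_lags)
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ face)

# Contingency score: agreement (correlation) between the facial signal
# predicted from speech alone and the facial signal actually observed.
pred = X @ trf
score = np.corrcoef(pred, face)[0, 1]
```

On a genuine (contingent) pairing like this toy one, the score is high; shuffling the pairing of speech and facial streams, as in a fake-dyad condition, would drive it toward zero.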