Vigilance towards AI voices can be nudged through a change in MINDSET

Abstract

AI-assisted technology is used to create synthetic voices that are highly naturalistic, making it difficult for listeners to distinguish between real and synthetic speech. While listeners often show a bias towards classifying these voices as human, this effect is even stronger when the voices use underrepresented regional or non-standard dialects, presumably because listeners are not used to such varieties being represented by speech technology. This MINDSET (Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology) could leave some language communities more at risk of AI-voice-based deception. To address this, the current study tested whether simple informational nudges could shift listeners’ default assumptions away from “Human” and increase their vigilance towards categorising voices as “AI”. Experiment 1 (N = 150) investigated whether nudges outlining AI’s ability to produce (Scottish) accents and dialects would affect human categorisation responses. The results showed a significant reduction in “Human” responses following a nudge describing AI’s capability to authentically reproduce these varieties. In Experiment 2 (N = 150), a vigilance-based nudge warning about the risks of AI deception was tested alone and in combination with the capability message. Only the capability-based nudge had a measurable effect, suggesting that updating expectations about what AI can convincingly reproduce is more effective than simply warning listeners to be cautious. As AI voice technology becomes more widespread, such nudges may offer a low-cost strategy for increasing vigilance, particularly in communities whose language varieties have been historically marginalised or excluded from speech technology systems.