Artificial Intelligence in Broadcasting: Public Trust and Misinformation Detection in Robotic News Presentation

Abstract

Purpose: Social robots are increasingly proposed for high-stakes societal roles such as journalism. This raises questions about public trust, especially for non-humanoid designs, and about how source credibility affects an audience's critical evaluation of information.

Methods: In a field experiment, 40 participants in Brazil interacted with an emotionally expressive, non-humanoid social robot that presented three news stories, one of which was fabricated. Pre- and post-interaction questionnaires assessed perceived trust, role acceptance, and misinformation detection.

Results: The robot achieved high levels of perceived credibility, and this trust correlated positively with its acceptance in journalistic roles. However, credibility did not translate into better discernment: participants identified the fabricated story at a rate below chance. Furthermore, the brief interaction did not significantly alter participants' pre-existing concerns about, or perceived advantages of, the technology.

Conclusion: The findings reveal a critical 'credibility paradox': a trusted robotic agent may inadvertently lower an audience's critical scrutiny, increasing vulnerability to misinformation. Public perceptions of robotic journalists appear robust and are not easily swayed by short-term interactions. Designing for credibility alone is therefore insufficient; a parallel focus on fostering critical engagement is essential for the responsible deployment of automated agents in the information ecosystem.
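As an illustrative aside (not part of the paper): a "below chance" detection result of this kind is typically checked with a one-sided exact binomial test. A minimal sketch in Python follows, assuming chance-level detection of 1/3 (one fabricated story among three); only N = 40 comes from the abstract, and the count of correct identifications is purely hypothetical.

    # One-sided exact binomial test: is the detection rate below chance?
    from scipy.stats import binomtest

    n = 40          # participants (from the abstract)
    k = 8           # hypothetical count of correct identifications
    chance = 1 / 3  # one fabricated story among three

    result = binomtest(k, n, p=chance, alternative="less")
    print(f"Detection rate: {k / n:.2f} (chance = {chance:.2f})")
    print(f"One-sided exact p-value: {result.pvalue:.4f}")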
