How Does the Anthropomorphism of Automated Vehicles Shape Human Trust? A Systematic Review and Meta-Analysis

Abstract

Trust is a critical factor in the acceptance of automated vehicles (AVs), yet market data indicate that public trust in AVs remains suboptimal. Among the strategies tested to address this, anthropomorphizing AVs in different modalities, such as adding human-like visual features or human voices, has gained particular attention. This meta-analysis examines the role of anthropomorphism in AV trust as a timely response to the ongoing debate. Following PRISMA guidelines, 66 articles and 70 effect sizes, involving 5,903 participants identified across seven databases, were included.

The results revealed that anthropomorphic interfaces, compared to non-anthropomorphic interfaces, do not significantly improve trust in AVs (Standardized Mean Difference (SMD) = 0.04, not significant). However, adding auditory anthropomorphic features is generally recommended over visual anthropomorphic features for increasing trust (SMD: 0.30 vs. -0.48). Moreover, different levels of anthropomorphism (superficial vs. deep) did not have a significant impact on trust in AVs. The overall trend indicated that, as participant age increased, reliance on anthropomorphic interfaces for increasing trust became more pronounced.

This meta-analysis shows that adding human-like features is not a dependable way to build trust in AVs. Future work is needed to compare anthropomorphism with other approaches, such as explainability. As a general suggestion for stakeholders, it is important to recognize that the impact of anthropomorphic interfaces on trust in AVs does not match common assumptions.
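For context, SMD denotes the standardized mean difference between the anthropomorphic and non-anthropomorphic conditions. A minimal sketch of the conventional Cohen's d style estimator is given below; the abstract does not specify the exact estimator (e.g., Hedges' g with a small-sample correction), so this is an illustrative assumption rather than the authors' stated method.

\[
\mathrm{SMD} = \frac{\bar{x}_{\text{anthro}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]

Under this convention, a positive SMD favors the anthropomorphic condition, and values near zero, such as the pooled SMD of 0.04 reported above, indicate a negligible difference in trust.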
