Humanlike AI Can Strengthen Women’s Belief in Sexist Stereotypes


Abstract

Can interactions with humanlike AI strengthen harmful stereotypical beliefs in people from predisposed and vulnerable groups? Anthropomorphic features have been shown to increase individuals’ perceptions of AI’s trustworthiness, but AI is also known to repeat gender stereotypes, raising the concern that anthropomorphic AI chatbots can strengthen stereotypes in individuals who are predisposed to these beliefs. Consistent with this prediction, we report results from four preregistered experiments on U.S. adults (N = 2,774) showing that politically conservative women believed the archetypal gender-math stereotype in a chatbot's response to be more accurate when the chatbot had lifelike features. The effect was mediated by perceived anthropomorphism (specifically, mind perception) and trustworthiness, and we ruled out an alternative cognitive mechanism. Neither liberal women nor conservative men showed this effect for the gender-math stereotype; however, our final experiment shows that liberal women exhibited the same indirect influence for a different gender stereotype that they are more predisposed to believe. In a formal model, we propose that ideological predisposition to specific gender stereotypes and social identity threat may converge, making women more susceptible to believing sexist stereotypes asserted by anthropomorphic AI chatbots. We argue that this effect could be prevented by socially conscious AI developers who de-anthropomorphize AI applications. Future research should test whether this effect occurs for other identity groups and the stereotypes they are predisposed to believe.