LLMs Amplify Gendered Empathy Stereotypes and Influence Major and Career Recommendations

Abstract

Large language models (LLMs) are increasingly deployed in sensitive domains such as education and career guidance, raising concerns that they may reproduce and amplify social biases. The present research examined whether LLMs exhibit gendered empathy stereotypes (specifically, the belief that "women are more empathetic than men") and whether such stereotypes influence downstream recommendations. Three studies were conducted. Study 1 compared six leading LLMs with human participants and found that the models' gendered empathy stereotypes were significantly stronger than humans' across three facets of empathy: emotional empathy, attention to others' feelings, and behavioral empathy. Study 2 manipulated input language (Chinese vs. English) and gender-identity priming (male vs. female), showing that English prompts and female priming elicited stronger gendered empathy stereotypes. Study 3 focused on major and career recommendation tasks and revealed that LLMs systematically recommended high-empathy majors and professions to women while directing men toward low-empathy fields. Together, these findings indicate that LLMs exhibit pronounced gendered empathy stereotypes, that these biases vary with input context, and that they can transfer into real-world recommendation scenarios. The research offers theoretical insight into bias formation in LLMs and practical implications for improving the fairness of AI systems used in educational and career guidance.
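
To make the Study 2 manipulation concrete, the following is a minimal sketch of how a 2 (input language: Chinese vs. English) x 2 (primed gender: male vs. female) prompting design could be assembled. The prompts, the rating question, and the query_llm helper are illustrative placeholders and are not the authors' materials or protocol.

```python
# Illustrative 2x2 design: input language (en/zh) crossed with gender-identity
# priming (female/male). All wording is hypothetical, for demonstration only.

# Gender-identity primes, written in the same language as the question.
PRIMES = {
    ("en", "female"): "You are a woman. ",
    ("en", "male"): "You are a man. ",
    ("zh", "female"): "你是一位女性。",
    ("zh", "male"): "你是一位男性。",
}

# A single self-rated empathy item per language (placeholder wording).
QUESTIONS = {
    "en": "On a scale from 1 (not at all) to 7 (extremely), how empathetic are you?",
    "zh": "请用1（完全不）到7（非常）评价你的共情程度。",
}


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-completion API."""
    raise NotImplementedError("Replace with a real LLM client call.")


def build_conditions() -> dict:
    # One prompt per cell of the language x primed-gender design.
    return {key: prime + QUESTIONS[key[0]] for key, prime in PRIMES.items()}


if __name__ == "__main__":
    for (lang, gender), prompt in build_conditions().items():
        print(f"[{lang}/{gender}] {prompt}")
        # rating = query_llm(prompt)  # uncomment once a real client is wired in
```

Collecting the model's numeric rating in each cell and comparing cells would then give one simple way to quantify how language and priming shift the expressed stereotype.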
