Chinese Large Language Models’ Text-to-Image Generation of Occupational Gender Stereotypes and Their Governance
Abstract
This study conducts an algorithmic audit of two Chinese text-to-image large language model apps, Dreamina and Kling. The findings indicate that, although the models differ in how well they control occupational gender stereotypes, AI-generated images consistently exaggerate gender ratios in line with those stereotypes. Notably, male representation in male-dominated occupations is exaggerated more than female representation in female-dominated ones, yielding a disproportionately low overall representation of professional women. The number of people in a generated image affects the gender ratio: multi-person images exaggerate stereotypical occupational proportions to a lesser degree than single-person images do. The sorting position of an image during generation, however, has no significant effect on the degree of stereotype reinforcement. These results provide compelling evidence for the persistence of social biases in large language models, revealing that gender stereotypes become further entrenched in the cognitive feedback loop between algorithms and users. To mitigate the internalization of social biases present in training data, it is essential to raise public awareness and foster artificial intelligence literacy, empowering users to identify and rectify biases within these complex systems.