AI-Generated Avatars and Representational Bias: A Computational Analysis of Posthuman Embodiment
Abstract
Generative machine learning models increasingly produce synthetic avatars that embody posthuman forms—hybrids of biological, mechanical, and mythological elements. Yet these models operate on training data saturated with stereotypes about gender, race, and embodiment. This paper examines how text-to-image diffusion models encode and reproduce representational bias when generating "cyborg mermaid" avatars. Through computational methods including CLIP embedding analysis, color palette clustering, and prompt-output divergence measurements, I demonstrate systematic patterns of gender amplification, mechanical standardization, and erotic drift in model outputs. The analysis reveals how latent diffusion models compress the conceptual diversity of posthuman embodiment into narrow, culturally stereotyped visual forms. I argue that these computational imaginaries shape not only aesthetic production but also the boundaries of identity construction in virtual spaces. This work contributes a methodological framework for detecting representational bias in generative models while connecting information-theoretic analysis to critical posthuman theory.
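The abstract names prompt-output divergence as one of its measurements; the sketch below is a minimal, hypothetical illustration of how such a score could be computed with CLIP embeddings. It is not the paper's implementation: the checkpoint ("openai/clip-vit-base-patch32"), the function name prompt_output_divergence, and the use of cosine distance are all assumptions for illustration.

```python
# Hypothetical sketch of a prompt-output divergence score using CLIP embeddings.
# Assumptions (not from the paper): the checkpoint, the function name, and the
# choice of 1 - cosine similarity as the divergence measure.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def prompt_output_divergence(prompt: str, image: Image.Image) -> float:
    """Return 1 minus the cosine similarity between prompt and image CLIP embeddings."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Normalize both embeddings so their dot product equals cosine similarity.
    text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    image_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    return 1.0 - float((text_emb @ image_emb.T).item())

# Example usage: score a generated avatar against its generating prompt.
# divergence = prompt_output_divergence("a cyborg mermaid avatar", Image.open("avatar.png"))
```

A higher score under this sketch would indicate that the generated image has drifted further from the semantic content of its prompt, which is one plausible way to operationalize the drift patterns the abstract describes.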