Considering how machine-learning algorithms (re)produce social biases in generated faces
Abstract
Advances in computer science, specifically in the development and use of generative machine learning, have provided psychologists with powerful new tools for creating synthetic human faces as stimuli. These tools produce high-quality, photorealistic face images and offer many advantages, including sidestepping ethical and privacy concerns and generating faces from minoritized communities that are typically underrepresented in existing face databases. However, machine learning-based face generation and manipulation software can introduce bias into the research process in a number of ways, thereby threatening the validity of studies. The present article summarizes how one class of recently popular face-generation algorithms, generative adversarial networks (GANs), works; how GANs are controlled; and where biases (with a particular focus on racial biases) emerge throughout these processes. We discuss recommendations for mitigating these biases, as well as how these concepts manifest in similar modern text-to-image algorithms.
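For readers unfamiliar with the mechanism the article examines, the standard GAN formulation (stated here as general background, not reproduced from the article itself) trains a generator $G$ against a discriminator $D$ in a minimax game:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

Here $G$ maps random latent vectors $z$ to images, and $D$ learns to distinguish real training faces from generated ones. Because $G$ is optimized to match the training distribution $p_{\text{data}}$, any demographic skew in the training data can propagate directly into the faces $G$ produces.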