Using AI to Generate Affective Images: Methodology and Initial Library
Abstract
We introduce a human-in-the-loop pipeline for creating context-aware (e.g., culture, sex, and age) affect induction images, along with the initial Library of AI-Generated Affective Images (LAI-GAI). Image-based affect induction research currently suffers from weak-to-moderate elicitation effects, limited image diversity, and minimal cultural tailoring of stimuli. Using generative AI guided by existing datasets and emotion taxonomies, we generated 847 images and corresponding descriptions across 12 discrete emotions, then iteratively refined them with local cultural experts. We validated the library in six studies (total n = 2,470; 58 countries). Participants rated five types of images: (1) images from existing affective databases, (2) AI-generated images without cultural adjustments, (3) AI-generated images adjusted to specific cultural contexts, (4) AI-generated images adjusted by sex (male, female), and (5) AI-generated images adjusted by age group (childhood, adulthood, older age). The AI-generated images elicited affective responses as effectively as images from existing affective databases. Culturally adjusted images were slightly more effective than their unadjusted counterparts at eliciting the intended emotions. Sex- and age-adjusted variants produced responses comparable to those of their base images, demonstrating controllability without loss of affective impact. Furthermore, we calculated the smallest subjectively experienced difference for affect induction research (d values from 0.05 to 0.29). This work demonstrates that researchers can now generate high-quality affect induction stimuli cost-effectively and at scale, and tailor them to diverse contexts, overcoming longstanding barriers and laying the groundwork for future AI-driven methodologies in affective science.
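As an illustration only, not the authors' analysis code, the sketch below shows how a standardized effect size of the kind reported above might be computed between two sets of image ratings. It assumes the reported d values refer to a pooled-SD Cohen's d between independent groups; all variable names and rating values are hypothetical.

```python
import numpy as np

def cohens_d(ratings_a: np.ndarray, ratings_b: np.ndarray) -> float:
    """Pooled-SD Cohen's d between two independent groups of ratings (an assumed metric)."""
    n_a, n_b = len(ratings_a), len(ratings_b)
    var_a, var_b = ratings_a.var(ddof=1), ratings_b.var(ddof=1)
    # Pooled standard deviation across the two groups
    pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (ratings_a.mean() - ratings_b.mean()) / pooled_sd

# Hypothetical example: simulated 1-9 ratings for a database image vs. an AI-generated image
rng = np.random.default_rng(0)
db_ratings = rng.normal(6.2, 1.5, 200)  # illustrative values, not study data
ai_ratings = rng.normal(6.4, 1.5, 200)
print(f"d = {cohens_d(ai_ratings, db_ratings):.2f}")
```

Under this reading, a d near the reported 0.05–0.29 range would mark the smallest rating difference that participants subjectively experience as different.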