Attention, Emotion, and Authenticity: Eye-Tracking Evidence from AI vs. Human Visual Design

Abstract

This study investigates how viewers perceive, attend to, and emotionally respond to AI-generated versus human-created visual content, integrating multimodal data from eye tracking, facial coding, and self-report surveys. The sample consisted of 136 undergraduate and graduate students enrolled in a graphic design program at a public university. Participants viewed a series of static and video stimuli produced by either human designers or artificial intelligence systems. Gaze behavior (fixation count, duration, and saccade length), emotional reliability (k-coefficient from RealEye facial coding), and attitudinal evaluations were analyzed through both parametric and nonparametric statistical tests. The results reveal that human-made visuals elicited longer viewing durations (M = 7035 ms), higher fixation counts (M = 1.44), and broader spatial exploration, suggesting richer semantic and aesthetic engagement. In contrast, AI-generated images produced shorter but more focused attention patterns (M = 4945 ms) and higher but less stable emotional reactions (k = 0.16). The correlation between fixation metrics and affective responses was non-significant (ρ = −0.015), indicating that cognitive attention and emotional resonance operate as distinct dimensions. Attitudinal data showed 68.4% accuracy in attributing authorship, with AI visuals often misclassified as human-made, reflecting a perceptual authenticity bias. Participants described AI content as technically refined yet emotionally limited. These findings suggest that while AI imagery achieves perceptual salience, it still lacks the emotional intentionality and narrative coherence that characterize human creativity.
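For readers who want to reproduce this style of analysis, the following is a minimal sketch of the kind of tests the abstract describes: a nonparametric group comparison of viewing durations and a Spearman rank correlation between fixation metrics and affective responses. The data here are synthetic and the variable names are illustrative assumptions; this is not the authors' actual analysis code or dataset.

```python
# Illustrative sketch only: synthetic data seeded to mimic the abstract's
# reported means, not the study's real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-trial viewing durations (ms) for the two stimulus types,
# centered on the reported means (human: 7035 ms, AI: 4945 ms).
human_duration = rng.normal(7035, 1500, size=68)
ai_duration = rng.normal(4945, 1500, size=68)

# Nonparametric group comparison (Mann-Whitney U), one plausible choice
# among the "parametric and nonparametric statistical tests" mentioned.
u_stat, p_value = stats.mannwhitneyu(human_duration, ai_duration)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Spearman rank correlation between fixation counts and affective responses;
# the abstract reports a non-significant rho of -0.015 for this relationship.
fixation_count = rng.poisson(1.44, size=136).astype(float)
affect_score = rng.normal(0.16, 0.05, size=136)  # e.g., facial-coding k values
rho, p_rho = stats.spearmanr(fixation_count, affect_score)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.3f}")
```

A rank-based correlation is a sensible default here because fixation counts are discrete and gaze metrics are typically non-normal, which is consistent with the abstract's use of nonparametric tests alongside parametric ones.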
