Who Falls Into the Trap of Misleading AI-Generated Videos
Abstract
Video has long been treated as privileged evidence, yet highly realistic AI-generated videos blur the line between observation and fabrication, making it urgent to understand who is vulnerable to being drawn into misleading information environments and whether that vulnerability can be anticipated. This study examines how and why users become embedded in echo chambers centered on misleading AI-generated short videos, conceptualizing susceptibility as a dynamic risk trajectory rather than a static outcome. Using 4,580 short videos from TikTok, Instagram, Douyin, and Kuaishou, we identify 145 misleading AI-generated videos through a two-stage pipeline combining automated detection and expert verification, then analyze 1,039,564 cross-platform comments and repost interactions. Echo chamber communities are detected using Leiden clustering, and echo chamber strength is quantified with an embedding-based score and data-driven thresholding. We focus on users rather than content alone by tracking 14 cognitive, behavioral, and activity-related features updated daily over a 30-day window, and we evaluate spatiotemporal deep learning architectures for forecasting user-level transitions into echo chambers. ConvLSTM delivers the strongest predictive performance while maintaining favorable computational efficiency, and feature-importance analyses show that susceptibility is driven primarily by cognitive and behavioral signals, especially lower rationality, higher curiosity, and more intensive posting activity, whereas demographic attributes contribute little explanatory power. These findings position exposure to misleading AI-generated video as a measurable and predictable behavioral process and support platform governance strategies that prioritize early warning systems and behavior-responsive interventions over demographic profiling to mitigate emerging AI-driven misinformation risks.
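The abstract's embedding-based echo chamber score is not specified in detail here. One plausible formulation, sketched below under assumptions of my own (the names `echo_score` and `data_driven_threshold` are illustrative, not from the paper), averages pairwise cosine similarity of member content embeddings within each Leiden-detected community and then derives the "echo chamber" cutoff from the distribution of scores across communities rather than from a fixed constant:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def echo_score(embeddings):
    """Mean pairwise cosine similarity of member embeddings.

    High internal similarity suggests members engage with
    homogeneous content, one marker of an echo chamber.
    """
    n = len(embeddings)
    if n < 2:
        return 0.0
    sims = [cosine(embeddings[i], embeddings[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

def data_driven_threshold(scores, k=1.0):
    """Cutoff = mean + k * std of per-community scores,
    so echo chamber status is relative to the corpus
    instead of an arbitrary fixed value."""
    m = sum(scores) / len(scores)
    var = sum((s - m) ** 2 for s in scores) / len(scores)
    return m + k * math.sqrt(var)
```

A community would then be flagged when `echo_score(members) > data_driven_threshold(all_scores)`; the actual paper may use a different similarity measure or thresholding rule.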