An AI agent can complete the Attention Network Test with human-like behavioral signatures: Implications for the bot-or-not debate
Abstract
Can AI agents produce behavioral data that passes as human? This question carries direct consequences for any field that relies on online reaction time (RT) experiments. Recent proposals for bot detection emphasize distributional shape, mean-variance scaling, and trial-wise autocorrelation of RTs, though the sufficiency of these markers has been challenged. We report the iterative development and empirical evaluation of an autonomous AI agent that completes the Attention Network Test (ANT) on a live Pavlovia experiment, producing behavioral data in real time. Across seven code revisions, each informed by analysis of the agent's output, the bot achieved attention network scores within published human norms (alerting = 65.1ms, orienting = 52.1ms, executive = 72.6ms), 95.8% accuracy, and an RT distribution exhibiting positive skew and trial-to-trial autocorrelation. We evaluated the agent against 796 human participants who completed the same ANT implementation across three university sites. The bot fell within the human range on QQ normality (z = -0.09), skewness (z = -0.77), and all three network scores, but showed elevated autocorrelation and a bimodal RT distribution from intermittent detection failures. Building this agent was technically feasible but required substantial iterative effort, experiment-specific reverse engineering, and repeated access to behavioral output. These constraints make widespread deployment unlikely for complex RT tasks in the near term, though the barrier will lower as agentic AI tools mature.
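The bot-detection markers named above (positive skew and trial-wise autocorrelation of RTs) and the three ANT network scores can be computed from raw trial data. The sketch below is illustrative only, not the authors' analysis pipeline; the condition names in `ant_network_scores` are hypothetical dictionary keys, and the subtraction scheme follows the conventional ANT definitions (alerting = no cue − double cue, orienting = center cue − spatial cue, executive = incongruent − congruent).

```python
import numpy as np

def skewness(x):
    """Sample skewness: positive values indicate the long right tail
    typical of human reaction-time distributions."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def lag1_autocorrelation(x):
    """Trial-to-trial (lag-1) autocorrelation of the RT series:
    correlation of each trial's RT with the next trial's RT."""
    x = np.asarray(x, dtype=float)
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

def ant_network_scores(mean_rt):
    """Attention network scores (ms) from condition-mean RTs.

    `mean_rt` is a dict keyed by cue/flanker condition (key names
    assumed here for illustration)."""
    return {
        "alerting": mean_rt["no_cue"] - mean_rt["double_cue"],
        "orienting": mean_rt["center_cue"] - mean_rt["spatial_cue"],
        "executive": mean_rt["incongruent"] - mean_rt["congruent"],
    }
```

For example, condition means of 565 ms (no cue) vs. 500 ms (double cue) would yield an alerting score of 65 ms, in the range the agent produced.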