Estimating the threat of AI-agent responding across online survey platforms

Abstract

Recent research and advances in LLMs have led to widespread concern that AI agents could pose as human online survey-takers. However, it remains unclear how prevalent AI agents are on these platforms and how to detect them effectively. We validated a series of AI detection tests that effectively separated verified-human participants from three AI agents (designed using various prompts). Using these tests, we collected surveys on seven online platforms and found high variance in the rates at which participants failed the AI tests, ranging from 6% to 41% across platforms (compared with a 2.4% false-positive rate among in-person human participants). We demonstrate that undetected AI agents can affect the results of online surveys. Our findings suggest an urgent need for AI detection tests and for consistent, systematic monitoring of data quality on online platforms, although some platforms currently appear to provide data with a low rate of AI agents.
