Shadow AI thrives under punitive social evaluation
Abstract
Generative Artificial Intelligence (GenAI) tools, such as ChatGPT, offer significant performance benefits across professional tasks. Yet their adoption in work-related contexts is complicated by social disapproval and penalties, especially under conditions of mandated transparency. In three studies (one pre-registered; n = 1,678 applicants and n = 477 evaluators), we investigate how people navigate this augmentation-approval tradeoff in an incentivized mini-job application scenario. We find that mandatory disclosure substantially reduces visible AI adoption but prompts a covert behavioral strategy we term shadow adoption: using AI in ways that avoid detection and disclosure. Strikingly, these shadow AI users produce the highest-quality applications, as rated by HR professionals who are unaware that the outputs were AI-assisted. As knowledge of the tradeoff spreads, shadow adoption becomes more prevalent, with nearly twice as many people choosing to use shadow AI. These results reveal a misalignment between well-intended transparency rules and user incentives in work-related contexts. Policies and technologies designed to enforce ethical AI use may inadvertently encourage covert behavior, rewarding concealment over compliance.