How Does AI Disclosure Shape Trust? Unpacking the Role of Legitimacy
Abstract
As generative artificial intelligence (AI) is increasingly adopted, understanding how its usage is perceived has become crucial for theory and practice. Our investigation highlights how disclosing AI usage reduces trust by triggering legitimacy concerns arising from deviations from taken-for-granted, human-centered norms. Drawing on a micro-institutional perspective, we unpack legitimacy into its dimensions and propose that they operate via three context-specific processes—perceived typicality, commitment, and authenticity—that jointly account for the erosion of trust resulting from AI disclosure. An initial structured content-analytic study of directed written interviews reveals that people indeed voice these legitimacy concerns when scrutinizing AI usage, and it addresses research questions about how such concerns manifest across the legitimacy dimensions. A subsequent vignette experiment shows that disclosing AI usage sequentially diminishes perceptions of typicality, commitment, and authenticity, ultimately lowering trust. A supplementary replication experiment confirms this pattern. Altogether, our investigation clarifies the paradoxical nature of transparency, advances empirical testing of legitimacy theory, and helps bridge the literatures on trust and institutional theory.