Acceptance Is Not Enough: Toward a Psychology of Calibrated GenAI Use


Abstract

Generative AI is rapidly diffusing into organisational work, yet widespread uptake does not reliably translate into demonstrated value. We address this gap by distinguishing acceptance (favourable beliefs and intentions toward use) from effective use (goal-directed, context-appropriate use that improves task outputs). Through a mixed-methods design combining psychological resource assessments, self-reported acceptance scales, and behavioural task performance, we investigate how individual differences in emotional, cognitive, and social skills predict these outcomes cross-sectionally (N = 105) and longitudinally (N = 70 over three months).

Results reveal that proactivity and planning; tolerance for uncertainty; relational orientation and support-seeking; and engagement and sustained focus are shared predictors of both acceptance and effective use. Planning structures interactions to prevent information overload; tolerance for uncertainty fosters innovation while requiring critical scrutiny; relational orientation enhances collaboration and audience-centric adaptation; and engagement reduces cognitive load while enabling rigorous refinement. Emotional stability and self-regulation predict only acceptance, helping users persist through AI errors, whereas exploration drive and risk-taking are uniquely linked to effective use, supporting experimentation and novel applications.

Longitudinal analyses further uncover adaptive recalibration trajectories, in which initial enthusiasm for GenAI may decline as users update their reliance strategies in response to errors or task demands. Notably, organisational context (particularly perceived efficacy and need fulfilment) emerges as a stronger predictor of acceptance than country-based cultural differences.

These findings challenge the assumption that adoption metrics (e.g., usage volume or attitudes) suffice to capture value realisation. Instead, they underscore the need for targeted interventions, such as verification routines for high-tolerance users or structured exploration frameworks for risk-takers, to bridge acceptance and effective use. By reframing GenAI adoption as a dynamic, socio-technical process, this work provides a foundation for designing environments that foster calibrated, sustainable human-AI collaboration.
