AI Leadership Without Integration: Evidence of Human–AI Misalignment in Innovation Processes and Outcomes
Abstract
This study examines the relationships among AI leadership, human-centered independence, and organizational innovation processes and outcomes, challenging the prevailing assumption that leadership-driven AI adoption is inherently associated with improved performance. The research draws on a dual-structured model of AI leadership, comprising AI-driven innovation leadership (Sun) and reflective AI governance leadership (Moon), to examine whether these approaches are associated with human capability development and innovation performance. Data were collected from 2,754 respondents across diverse organizational contexts using a structured survey. The measurement model was validated through exploratory and confirmatory factor analysis, and the hypotheses were tested using structural equation modeling (SEM). The results indicate that none of the proposed positive relationships is empirically supported. Neither leadership dimension shows a statistically significant relationship with human-centered independence or innovation performance; the only statistically significant relationship is negative, indicating that human-centered independence, when not integrated with AI, is associated with lower levels of innovation outcomes. The absence of mediation and the negligible explained variance further indicate the lack of an integrated structural relationship among the examined constructs. These findings challenge linear models of AI leadership by showing that the coexistence of AI-oriented leadership and human-centered capabilities does not ensure their integration. The study proposes the AI–Human Misalignment Framework as an interpretative lens, suggesting that innovation outcomes may depend on alignment rather than on the mere presence of capabilities.