The Role of Explainable Artificial Intelligence (XAI) in Enhancing Strategic Decision-Making: A Mixed-Methods Approach Across Industries

Abstract

The integration of explainable artificial intelligence (XAI) into organizational strategic decision-making has emerged as a critical enabler of transparency and risk mitigation. This study investigates how XAI frameworks, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), enhance managerial trust and operational efficiency. Using a mixed-methods approach, we analyzed data from 450 decision-making instances across 15 multinational corporations, applying multiple linear regression to quantify the relationship between XAI adoption and decision accuracy (R² = 0.78, p < 0.001). Results indicate a 32% improvement in strategic alignment when XAI-generated insights are used, alongside a 25% reduction in operational risks. Cluster analysis revealed distinct organizational profiles that benefit most from XAI, particularly in high-stakes sectors such as finance and healthcare. The study highlights the necessity of balancing algorithmic complexity with interpretability, addressing ethical concerns through robust validation protocols (ANOVA, F = 6.42, p = 0.012). These findings underscore XAI's potential to bridge the gap between data-driven insights and human judgment, advocating for hybrid decision-support systems in dynamic environments. Limitations include sample homogeneity, suggesting future exploration of broader cross-industry applications.
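
The abstract names SHAP as one of the explanation frameworks under study. As a minimal, hedged illustration of how SHAP attributions are generated in practice, the sketch below fits a tree-ensemble model on synthetic data and extracts per-feature SHAP values for a single decision instance. The feature names, model choice, and data are illustrative assumptions, not the paper's actual pipeline or variables.

```python
# A minimal sketch (not the study's pipeline) of producing SHAP attributions
# for a decision-support model. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical predictors of decision accuracy for 450 decision instances.
feature_names = ["xai_adoption", "data_quality", "sector_risk", "team_experience"]
X = rng.random((450, len(feature_names)))
# Synthetic "decision accuracy" target, driven mainly by XAI adoption.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 450)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (450, 4)

# Per-feature attribution for the first decision instance.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Each attribution shows how much a feature pushed this prediction above or below the model's average output; it is this instance-level, additive accounting that makes SHAP-style explanations auditable in managerial decision reviews of the kind the study examines.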