iAdditive: Fast and Fair Feature Importance Estimation for Correlated Features Using Shapley Values

Abstract

Interpreting machine learning models fairly and efficiently remains challenging, particularly when features are correlated. Classical Shapley-based explanations can split attribution among substitutes and are often computationally demanding. This study presents iAdditive, a model-agnostic approach that promotes fairness by grouping highly dependent features and allocating a shared contribution, and improves efficiency via a dynamic coalition heuristic inspired by additive explanation methods. Experiments on simulated datasets with known structure and on NHANES indicate that iAdditive produces faithful global attributions under correlation while achieving substantial runtime reductions compared with KernelSHAP, TreeSHAP, SAGE, and exact Shapley baselines. By balancing fairness, interpretability, and efficiency, iAdditive provides a practical tool for trustworthy decision support in applications such as healthcare.
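The abstract describes two mechanisms: grouping highly dependent features so substitutes share one attribution, and sampling coalitions rather than enumerating them. The sketch below illustrates that general idea, not the paper's actual algorithm: it groups features by a simple absolute-correlation threshold (connected components of the thresholded correlation graph) and estimates group-level Shapley values by Monte-Carlo permutation sampling. All function names, the threshold, and the baseline choice are illustrative assumptions.

```python
import numpy as np


def correlation_groups(X, threshold=0.8):
    """Group columns whose absolute pairwise correlation exceeds `threshold`
    (connected components of the thresholded correlation graph, via union-find).
    NOTE: illustrative grouping rule, not iAdditive's actual dependence measure."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = corr.shape[1]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())


def group_shapley(model, X, groups, baseline=None, n_perm=200, rng=None):
    """Monte-Carlo Shapley values at the *group* level: all features in a
    group enter a coalition together, so correlated substitutes receive one
    shared attribution instead of splitting it. Sampling permutations of the
    (few) groups is far cheaper than enumerating feature-level coalitions."""
    rng = np.random.default_rng(rng)
    if baseline is None:
        baseline = np.zeros(X.shape[1])  # illustrative baseline choice
    phi = np.zeros(len(groups))
    for _ in range(n_perm):
        order = rng.permutation(len(groups))
        x_cur = np.tile(baseline, (X.shape[0], 1))
        prev = model(x_cur).mean()
        for g in order:
            # Switch the whole group from baseline to observed values at once.
            x_cur[:, groups[g]] = X[:, groups[g]]
            cur = model(x_cur).mean()
            phi[g] += cur - prev
            prev = cur
    return phi / n_perm


# Demo on synthetic data: x0 and x1 are near-duplicates, x2 is independent.
data_rng = np.random.default_rng(0)
x0 = data_rng.normal(2.0, 1.0, 500)
X = np.column_stack([x0, x0 + 0.01 * data_rng.normal(size=500),
                     data_rng.normal(1.0, 1.0, 500)])
model = lambda Z: Z.sum(axis=1)  # toy additive model
groups = correlation_groups(X, threshold=0.8)   # -> [[0, 1], [2]]
phi = group_shapley(model, X, groups, n_perm=50, rng=1)
```

Because the per-permutation marginal contributions telescope, the group attributions sum exactly to the gap between the mean prediction and the baseline prediction (the efficiency property), while the correlated pair `x0`, `x1` is reported as a single shared contribution rather than two halved ones.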
