Ethical AI in Customer Segmentation: An Explainability, Fairness, and Behavioral Autonomy Framework

Abstract

Artificial intelligence (AI)-driven customer segmentation offers powerful commercial capabilities but simultaneously creates multi-dimensional ethical risks, including algorithmic bias, opacity, privacy erosion, and behavioural manipulation. Despite growing regulation and scholarly interest, no integrated, empirically validated framework yet addresses these risks in business intelligence (BI) practice. This research builds and validates the Ethical-by-Design Business Intelligence (EDBI) framework, which systematically embeds Explainable AI (XAI), algorithmic fairness auditing, privacy-preserving analytics, and behavioural autonomy protection throughout the customer segmentation lifecycle. Two novel constructs are introduced: the Ethical Segmentation Score (ESS), a composite governance index operationalising transparency, fairness, privacy, and accountability; and the Behavioural Autonomy Index (BAI), which measures perceived manipulation, decision independence, and algorithm awareness. A sequential multi-phase design combining qualitative exploration (n = 25 professionals), computational experimentation on a synthetic e-commerce dataset (50,000 records), and a behavioural experiment (n = 210) finds that ethical-by-design segmentation systems are rated significantly more trustworthy and fair, and their personalisation significantly more acceptable (Cohen's d range: 1.50–1.82, p < .001). The framework is aligned with the General Data Protection Regulation (GDPR), the EU AI Act (2024), and India's Digital Personal Data Protection (DPDP) Act (2023), yielding a practical implementation roadmap for organisations.
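The effect sizes reported above (d = 1.50–1.82) use Cohen's d for two independent groups. For readers unfamiliar with the measure, a minimal sketch of the standard pooled-standard-deviation computation follows; this is a generic illustration of the formula, not the authors' analysis code, and the sample values are invented for demonstration.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples: difference in means
    divided by the pooled (Bessel-corrected) standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances with (n - 1) denominators
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical trust ratings: ethical-by-design condition vs. control
ethical = [6.1, 6.4, 5.9, 6.3, 6.0]
control = [4.2, 4.5, 4.1, 4.4, 4.3]
print(round(cohens_d(ethical, control), 2))
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), values in the reported 1.50–1.82 range indicate very large differences between the ethical-by-design and baseline conditions.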
