Fair machine learning models for disease prediction: In-depth interviews with key health experts

Abstract

Artificial intelligence (AI) and machine learning (ML) hold enormous potential for improving quality of life. They can also generate significant social, cultural and other unintended risks. We aimed to explore fairness concepts that can be applied in ML models for disease prediction, from the perspectives of key health experts in an ethnically diverse high-income country. In-depth interviews with key experts in the health sector in Aotearoa New Zealand (NZ) were conducted between July and December 2022. We invited participants who are key leaders in their ethnic communities, including Māori (Indigenous), Pasifika and Asian communities. The interview questionnaire comprised six sections: (1) Existing attitudes to healthcare allocation; (2) Existing attitudes to data held at the general practitioner (GP) level; (3) Acceptable data to hold at the GP level for disease prediction models; (4) Trade-offs between obtaining benefits and generating unnecessary concern in deploying these models; (5) Reducing bias in risk prediction models; and (6) Incorporating community consensus into disease prediction models for fair outcomes. The study shows that participants were strongly united in the view that ML models should not create or exacerbate inequities in healthcare through biased data or unfair algorithms. An exploration of fairness concepts showed that data types must be carefully selected for predictive modelling, and that trade-offs between obtaining benefits and generating unnecessary concern produced conflicting opinions. Participants expressed high acceptance of using ML models but voiced deep concerns about inequity and how these models might affect the most vulnerable communities (such as middle-aged and older Māori and those living in deprived communities). Our results could help inform the development of ML models that consider social impacts in an ethnically diverse society.