Can co-designed educational interventions help consumers think critically about asking ChatGPT health questions? Results from a randomised controlled trial
Abstract
Background
Generative artificial intelligence (AI) tools offer individuals clear potential health benefits (e.g. simplifying information) alongside risks (e.g. inaccurate information).
Objective
To evaluate two brief co-designed health literacy interventions to help people critically reflect on health questions they ask ChatGPT.
Design
Three-arm, parallel-group randomised controlled trial.
Participants
Australian adults without university education who had used ChatGPT in the past 6 months, recruited via an online social research panel.
Interventions
(1) animation intervention, (2) image-based intervention, or (3) control.
Main measures
The primary outcomes were intention to ask ChatGPT a question in ‘lower risk’ and ‘higher risk’ scenarios, where higher risk scenarios would typically require clinical interpretation. Secondary outcomes were ChatGPT knowledge, trust in ChatGPT’s responses, and intervention acceptability.
Key results
Of the 619 participants, 592 were included in the analysis sample. Average age was 47.0 years (SD=16.4), 42.6% identified as a man or male, and 17.4% had limited or marginal health literacy. Participants in the animation group (n=191) reported lower intention to use ChatGPT for higher risk scenarios (M=2.42/5, 95%CI: 2.27 to 2.56) than those in the image-based group (n=203; M=2.69/5, 95%CI: 2.54 to 2.83; p=0.010). Participants in both intervention groups reported lower intentions to use ChatGPT for higher risk scenarios than those in the control group (n=205; M=3.12/5, 95%CI: 2.98 to 3.27; p<0.001). There was no effect of intervention group on intention to use ChatGPT for lower risk scenarios (p=0.800). Compared to the control group, participants in the intervention groups had higher knowledge of ChatGPT (p<0.001) and reported lower trust in its responses (p<0.001).
Conclusions
Brief health literacy interventions may help improve knowledge of ChatGPT and reduce intentions to ask riskier health questions. This study represents an initial step towards addressing AI health literacy and highlights the kinds of health literacy skills that can help people navigate AI tools safely.