Addressing climate change skepticism and inaction using human-AI dialogues
Abstract
We ask whether facts and evidence, tailored by an AI model to each person's specific concerns, can reduce climate skepticism and inaction. Participants first described their main reservation about climate change. The most prevalent were the belief that climate change has natural causes (15%), feeling overwhelmed by the problem (10%), and concern about the economic consequences of climate policies (8%). Participants were then randomized to (1) have a conversation with a Large Language Model (LLM) given the goal of addressing their climate reservations, (2) discuss an irrelevant topic with the LLM (i.e., control), or (3) receive static information about the scientific consensus on climate change (i.e., "standard-of-care"). The LLM treatment significantly and substantially reduced participants' conviction in their specific reservations, while consensus messaging did not. Both treatments had significant, albeit small, effects on general pro-climate beliefs and attitudes. Critically, however, the LLM treatment was significantly more effective, particularly in increasing willingness to make sacrifices to address climate change and donations to a pro-climate charity. The LLM primarily presented facts, evoked positive emotions, reduced psychological distance, and fostered motivation to act. It rarely invoked values or ingroup sources, and when it did, their use was associated with reduced belief change. The treatment substantially reduced Republicans' reservations (although less than for Independents or Democrats), and roughly 35% to 40% of the LLM treatment effect persisted after one month. These findings demonstrate that it is possible to reach many climate-skeptical or hesitant people with the right facts and evidence.