A nonpartisan source-grounded AI voter guide is perceived as trustworthy and affects voting intentions
Abstract
Generative AI tools like large language models (LLMs) are increasingly used to find and understand political information, but evidence on their neutrality, accuracy, and impacts on voters remains limited. We developed a voter information chatbot grounded in a nonpartisan political information source (Ballotpedia) to answer questions about federal and state-level races in the 2024 U.S. general election. In a preregistered experiment conducted the week before the election, eligible voters in California and Texas (N = 2,474) were randomly assigned to use the chatbot or to consult their usual election information sources. Across party affiliations, participants rated the information from the chatbot as trustworthy, accurate, and unbiased, consistent with text analyses showing that chatbot responses closely tracked source content. Additionally, participants assigned to use the chatbot reported higher turnout intentions, warmer affect toward supporters of the opposing party, and modest shifts in candidate vote intentions, including increased intentions to vote for Democratic candidates and, in some races, increased intentions to vote for candidates whose positions aligned with their own. In a national survey of U.S. adults (N = 2,842), 13.9% reported already using AI for voter information and 49.9% reported willingness to use a validated, nonpartisan AI voter guide, while most also reported concerns about accuracy and bias. Together, these results suggest that AI voter guides grounded in nonpartisan sources could be widely adopted, can provide information voters perceive as trustworthy, and can influence stated voting intentions, highlighting the importance of transparent design and independent evaluation of their accuracy and neutrality.