The Art of Repair in Human-Agent Conversations: A Taxonomy of Repair Strategies by Users and LLM-Based Conversational Agents
Abstract
Large Language Models (LLMs) such as ChatGPT are increasingly embedded in the everyday tasks of many professions, yet their outputs often remain unreliable, ambiguous, or misleading. This paper explores how users identify and repair troubles to set interactions with LLMs right across a range of situated contexts. Using an ethnomethodological lens, we examine 21 real-world chat transcripts drawn from diverse work settings. Our analysis reveals a broad repertoire of repair practices, including factual corrections, stylistic refinements, implicit signals, and strategic reframings. The findings challenge the view that users' repair work on LLM outputs is merely a response to system failure. Instead, we present a taxonomy of the repair work of both users and conversational agents, comprising 6 types of repair initiators (errors, dissatisfactions, apologies, shortcomings, implicit signals, and contextualization), 3 stages of repair elements (6 types of trouble classification, 3 types of trouble specification, and 7 types of trouble management), and 3 types of repair processes (incremental, grounding, and validating). These repair categories point to the core of human-agent collaboration: meaning and correctness are not pre-given but are achieved through situated work for all practical purposes. By treating trouble as an ordinary part of collaborative work, we highlight the need to design for user-driven repair alongside improving model reliability. These findings contribute to ongoing debates in HCI and CSCW around accountability, intelligibility, and the co-construction of meaning in human-AI interaction with LLM-based applications.