Experiments with a Diagnostic Framework for Addressee Recognition and Response Selection in Ideologically Diverse Conversations with Large Language Models

Abstract

The increasing deployment of conversational AI systems in real-world applications has drawn significant attention to the challenges posed by ideological biases embedded in their outputs. The concept of a "multi-ideology hangover" describes how conflicting ideological influences in training data persist and degrade the relevance and neutrality of responses during dialogue generation. This research presents a diagnostic framework for evaluating the effects of ideological bias on addressee recognition and response selection in large language models (LLMs), using a combination of coreference resolution, topic modeling, and contextual embeddings. Experiments on ideologically diverse conversations reveal that LLMs behave inconsistently in ideologically charged contexts, leading to potential bias amplification and reduced accuracy in addressee recognition. The findings expose the limitations of current automated evaluation techniques, underscoring the need for more advanced bias mitigation strategies and more robust evaluation methods to ensure neutrality in conversational AI systems. The study provides key insights into the difficulties LLMs face when handling ideologically conflicting dialogues, offering a foundation for improving future conversational systems in politically and culturally sensitive environments.
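
The abstract describes the diagnostic framework only at a high level, so the following is a minimal sketch of what an addressee-scoring step combining contextual embeddings with a mention heuristic might look like. The model name (`all-MiniLM-L6-v2`), the `mention_bonus` weight, and the scoring formula are illustrative assumptions, and the name-mention check merely stands in for full coreference resolution; none of this is the paper's actual implementation.

```python
# Hedged sketch: score candidate addressees in a multi-party conversation
# using sentence embeddings plus a simple name-mention heuristic.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding backbone


def score_addressees(history, target_utterance, mention_bonus=0.5):
    """Score each prior speaker as the likely addressee of `target_utterance`.

    history: list of (speaker, utterance) tuples preceding the target turn.
    Returns a dict mapping speaker -> score (higher = more likely addressee).
    """
    target_emb = model.encode(target_utterance, convert_to_tensor=True)
    scores = {}
    for speaker, utterance in history:
        utt_emb = model.encode(utterance, convert_to_tensor=True)
        # Contextual-embedding signal: semantic similarity to the target turn.
        sim = util.cos_sim(target_emb, utt_emb).item()
        # Coreference stand-in: reward explicit mentions of the speaker's name.
        bonus = mention_bonus if speaker.lower() in target_utterance.lower() else 0.0
        scores[speaker] = max(scores.get(speaker, float("-inf")), sim + bonus)
    return scores


if __name__ == "__main__":
    history = [
        ("Alice", "Public healthcare should be expanded nationwide."),
        ("Bob", "Markets handle healthcare more efficiently than governments."),
    ]
    target = "Bob, which countries support your efficiency claim?"
    print(score_addressees(history, target))  # expect Bob to score highest
```

A scoring function like this would let a diagnostic study compare addressee predictions across ideologically neutral and ideologically charged variants of the same conversation, which is the kind of contrast the abstract's experiments rely on.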
