Retrieval-Free Suggestion Question Generation via Large Language Models
Abstract
This paper addresses the challenge of ambiguous and poorly formulated user queries in Retrieval-Augmented Generation (RAG) based conversational systems. Current RAG systems often struggle to provide satisfactory responses to such queries, degrading the user experience. To mitigate this issue, we propose a novel approach to suggestion question generation that moves beyond traditional retrieval-based methods. Our method leverages the inherent knowledge and generative capabilities of Large Language Models (LLMs) to directly generate relevant and helpful suggestion questions, without explicit document retrieval at inference time. We train our models on a dedicated dataset of user queries and curated suggestion questions using a supervised learning strategy. Extensive experiments comparing our approach against zero-shot, few-shot, and RAG-based baselines demonstrate the superior performance of our LLM-driven method in terms of correctness, relevance, and helpfulness, further validated by human evaluations. Ablation studies and error analysis provide deeper insights into the effectiveness and limitations of our approach. The results highlight the potential of purely generative models for user query refinement and suggest a paradigm shift in suggestion question generation for conversational AI.
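The supervised training setup described above pairs user queries with curated suggestion questions. As a minimal illustrative sketch, the following Python snippet shows one plausible way such a record could be formatted into a prompt/completion example for supervised fine-tuning; the field names and prompt template are assumptions for illustration, not the paper's actual data format.

```python
# Hypothetical sketch: format one (user query, curated suggestion questions)
# record into a prompt/completion pair for supervised fine-tuning.
# The template and field names are illustrative assumptions.

def build_sft_example(user_query: str, suggestion_questions: list[str]) -> dict:
    """Turn one curated record into a prompt/completion training pair."""
    prompt = (
        "The following user query may be ambiguous or underspecified. "
        "Generate clearer suggestion questions the user might ask instead.\n"
        f"Query: {user_query}\n"
        "Suggestions:"
    )
    # The target completion lists the curated suggestion questions.
    completion = "\n".join(f"- {q}" for q in suggestion_questions)
    return {"prompt": prompt, "completion": completion}

example = build_sft_example(
    "python memory",
    [
        "How does Python manage memory for objects?",
        "How can I profile memory usage in a Python program?",
    ],
)
```

At inference time, only the fine-tuned model and the prompt would be needed, with no retrieval step, which is the retrieval-free property the abstract emphasizes.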