Joint Modeling of Intelligent Retrieval-Augmented Generation in LLM-Based Knowledge Fusion


Abstract

This study addresses the weak coupling between retrieval and generation in large-scale knowledge utilization by proposing a retrieval-augmented generation method enhanced with intelligent search algorithms. The approach encodes input queries and candidate knowledge passages into a unified semantic space and dynamically aggregates relevant knowledge through similarity measures and attention weighting, so that the generation stage receives high-quality external knowledge support. A fusion module is then constructed to jointly model retrieval and query representations, enabling the generation model to draw on retrieved content during text generation while preserving semantic coverage, factual consistency, and contextual coherence. A joint optimization mechanism is further introduced to optimize the retrieval and generation losses simultaneously, strengthening the interaction between the two modules and improving overall system performance. To validate the framework, comparative experiments were conducted on a publicly available discriminative dataset, along with sensitivity analyses under different hyperparameter settings, data perturbations, and environmental configurations. The experimental results show that the proposed method outperforms baseline models on key metrics such as F1, Precision, Recall, and Accuracy (ACC), while remaining stable and robust across vector dimensionalities, similarity measures, and retrieval index scales. These findings confirm that the proposed framework can provide more accurate, comprehensive, and consistent knowledge support in complex contexts, establishing a solid foundation for advancing integrated research on retrieval and generation.
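To make the abstract's pipeline concrete, the sketch below illustrates the general pattern it describes: encoding queries and candidate passages into a shared space, aggregating passages with similarity-based attention, fusing the aggregated context with the query representation for generation, and optimizing a combined retrieval-plus-generation loss. The module names (`JointRAGSketch`, `joint_loss`), the simple linear encoders, and the weight `lambda_ret` are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# A minimal PyTorch sketch of a jointly optimized retrieval-augmented model.
# All layer choices and sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointRAGSketch(nn.Module):
    def __init__(self, dim=256, vocab=1000):
        super().__init__()
        self.query_enc = nn.Linear(dim, dim)     # stand-in query encoder
        self.passage_enc = nn.Linear(dim, dim)   # stand-in passage encoder
        self.fuse = nn.Linear(2 * dim, dim)      # fuses query and retrieved context
        self.generator = nn.Linear(dim, vocab)   # stand-in generation head

    def forward(self, query_feats, passage_feats):
        q = self.query_enc(query_feats)                 # (B, D)
        p = self.passage_enc(passage_feats)             # (B, K, D)
        # Similarity scores between the query and each candidate passage,
        # turned into attention weights for knowledge aggregation.
        scores = torch.einsum("bd,bkd->bk", q, p) / q.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)                # (B, K)
        context = torch.einsum("bk,bkd->bd", attn, p)   # aggregated knowledge
        fused = torch.tanh(self.fuse(torch.cat([q, context], dim=-1)))
        logits = self.generator(fused)                  # generation (or label) logits
        return logits, scores

def joint_loss(logits, scores, targets, gold_passage_idx, lambda_ret=0.5):
    """Generation loss plus retrieval loss, optimized together."""
    gen_loss = F.cross_entropy(logits, targets)
    ret_loss = F.cross_entropy(scores, gold_passage_idx)
    return gen_loss + lambda_ret * ret_loss

# Usage with random tensors, purely to show the expected shapes.
model = JointRAGSketch()
q = torch.randn(4, 256)        # 4 queries
p = torch.randn(4, 8, 256)     # 8 candidate passages per query
logits, scores = model(q, p)
loss = joint_loss(logits, scores,
                  targets=torch.randint(0, 1000, (4,)),
                  gold_passage_idx=torch.randint(0, 8, (4,)))
loss.backward()
```

Because a single loss backpropagates through both the retrieval scoring and the generation head, the retriever is pushed toward passages that actually help generation, which is the interaction between the two modules that the joint optimization mechanism is meant to strengthen.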