Selective Knowledge Injection via Adapter Modules in Large-Scale Language Models
Abstract
This paper addresses key challenges in knowledge injection for large language models: static knowledge representations, the difficulty of updating them, and limited domain adaptability. It proposes a dynamic fine-tuning framework for knowledge injection built on parameter-efficient tuning strategies, introducing learnable adapter modules and gating mechanisms that enable selective integration and dynamic control of both structured and unstructured external knowledge. In the model design, a query encoder extracts semantic vectors from the input text; these vectors are matched against an external knowledge base to construct a dynamic knowledge subset that guides task generation. Adapter modules and gating units are then applied across model layers to modulate knowledge enhancement, ensuring that external knowledge contributes to reasoning while the model's original language capabilities are preserved. A unified joint loss function coordinates optimization between the language-modeling and knowledge-alignment objectives. The method is evaluated on the WikiHop multi-hop question-answering dataset, and model behavior is analyzed under varying experimental settings, including different parameter update ratios, knowledge densities, and cross-domain transfer scenarios. The results show that the method performs strongly on key metrics such as knowledge recall, F1 score, and inference efficiency even at low parameter update ratios, demonstrating the practicality and stability of the approach in dynamic knowledge integration tasks. The work offers a flexible and efficient technical path for knowledge injection and domain adaptation in large language models.
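The gated-adapter idea described above can be illustrated with a minimal sketch. This is a hypothetical, framework-agnostic NumPy rendering, not the paper's implementation: `GatedAdapter`, its bottleneck dimensions, the scalar per-layer gate, and the `joint_loss` weighting `lam` are all illustrative assumptions. The adapter fuses a knowledge vector with the layer's hidden states, passes the result through a down-/up-projection bottleneck, and mixes the output back in through a sigmoid gate on a residual connection, so the base representation is preserved when the gate is closed.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class GatedAdapter:
    """Bottleneck adapter with a learnable scalar gate (illustrative sketch).

    Only the adapter's projections and the gate would be trained; the
    backbone's weights stay frozen, which is what keeps the parameter
    update ratio low.
    """
    def __init__(self, d_model, d_bottleneck=8):
        self.W_down = rng.normal(0, 0.02, (d_model, d_bottleneck))
        self.W_up = rng.normal(0, 0.02, (d_bottleneck, d_model))
        self.gate_logit = 0.0  # sigmoid(0) = 0.5: half-open gate at init

    def __call__(self, hidden, knowledge):
        # hidden, knowledge: (batch, seq, d_model)
        fused = hidden + knowledge                      # inject retrieved knowledge
        delta = gelu(fused @ self.W_down) @ self.W_up   # bottleneck transform
        g = sigmoid(self.gate_logit)                    # layer-wise gate in (0, 1)
        return hidden + g * delta                       # gated residual update

def joint_loss(lm_loss, align_loss, lam=0.5):
    # Weighted sum coordinating the language-modeling and
    # knowledge-alignment objectives; lam is an assumed hyperparameter.
    return lm_loss + lam * align_loss
```

With the gate driven toward zero, the layer reduces to the frozen backbone's output, which is one way the design can preserve original language capabilities while still allowing knowledge-enhanced layers to open their gates where external knowledge helps.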