DecoStrat: Leveraging the Capabilities of Language Models in D2T Generation via Decoding Framework
Abstract
Current language models have achieved remarkable success in NLP tasks. Nonetheless, individual decoding methods struggle to realize the full potential of these models, largely because no decoding framework exists that flexibly integrates language models with interchangeable decoding methods. We introduce DecoStrat, a framework that bridges the gap between language modeling and the decoding process in data-to-text (D2T) generation. By leveraging language models, DecoStrat facilitates the exploration of alternative decoding methods tailored to specific tasks. We fine-tuned a model on the MultiWOZ dataset to meet task-specific requirements and employed it to generate outputs through the framework's interacting modules. The Director module orchestrates the decoding process, engaging the Generator to produce output text based on the selected decoding method and the input data. The Manager module enforces a selection strategy, integrating the Ranker and Selector to identify the optimal result. Evaluations on MultiWOZ show that DecoStrat produces diverse and accurate outputs, with Minimum Bayes Risk (MBR) variants consistently outperforming the other methods. DecoStrat with the T5-small model also surpasses some baseline frameworks. Overall, the findings highlight DecoStrat's potential for optimizing decoding methods in diverse real-world applications.
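To make the module interaction described above concrete, the following is a minimal sketch of how a Director, Generator, Manager, Ranker, and Selector might compose. All class interfaces, the toy word-overlap utility standing in for a real MBR metric, and the stub decoder are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the DecoStrat module flow described in the abstract.
# Every name and interface here is an illustrative assumption, not the authors' API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    text: str
    score: float = 0.0


class Generator:
    """Produces candidate outputs with the decoding method chosen by the Director."""

    def __init__(self, decode_fn: Callable[[str, int], List[str]]):
        self.decode_fn = decode_fn

    def generate(self, source: str, num_candidates: int) -> List[Candidate]:
        return [Candidate(text=t) for t in self.decode_fn(source, num_candidates)]


class Ranker:
    """Scores candidates with a toy MBR-style utility: average pairwise
    word overlap against the other candidates (a stand-in for a real metric)."""

    @staticmethod
    def _overlap(a: str, b: str) -> float:
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def rank(self, candidates: List[Candidate]) -> List[Candidate]:
        for c in candidates:
            others = [o for o in candidates if o is not c]
            c.score = sum(self._overlap(c.text, o.text) for o in others) / max(len(others), 1)
        return sorted(candidates, key=lambda c: c.score, reverse=True)


class Selector:
    """Applies the selection strategy to the ranked list (here: take the top one)."""

    def select(self, ranked: List[Candidate]) -> Candidate:
        return ranked[0]


class Manager:
    """Combines Ranker and Selector to identify the optimal result."""

    def __init__(self, ranker: Ranker, selector: Selector):
        self.ranker, self.selector = ranker, selector

    def best(self, candidates: List[Candidate]) -> Candidate:
        return self.selector.select(self.ranker.rank(candidates))


class Director:
    """Orchestrates decoding: asks the Generator for candidates and the
    Manager for the final output."""

    def __init__(self, generator: Generator, manager: Manager):
        self.generator, self.manager = generator, manager

    def run(self, source: str, num_candidates: int = 5) -> str:
        candidates = self.generator.generate(source, num_candidates)
        return self.manager.best(candidates).text


if __name__ == "__main__":
    # Stub decoder standing in for a fine-tuned model with a sampling method.
    def stub_decode(source: str, n: int) -> List[str]:
        return [f"{source} -> candidate {i}" for i in range(n)]

    director = Director(Generator(stub_decode), Manager(Ranker(), Selector()))
    print(director.run("inform(name=hotel, area=centre)"))
```

Under this reading, swapping the decoding method means swapping only `decode_fn`, and swapping the selection strategy means swapping only the `Ranker`/`Selector` pair, which is one plausible way the framework could keep language models and decoding methods decoupled.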