Construction of a cross-domain machine translation model based on meta-learning and semantic transfer

Abstract

In recent years, neural machine translation has made significant progress on standard corpora. However, it still suffers considerable performance degradation when the training and test corpora come from different domains, manifested as semantic drift, terminology mistranslation, and style inconsistency. To address this challenge, this paper proposes a cross-domain neural translation framework that integrates meta-learning and semantic transfer mechanisms, combining language-function modeling with Transformer semantic encoding to improve the model's adaptability and semantic alignment in low-resource target domains. The method introduces a task-level meta-learning strategy for fast domain transfer and uses contrastive learning to enforce consistency of the semantic space. Empirical evaluation is carried out on five cross-domain datasets drawn from OPUS and IWSLT. The proposed model outperforms eight mainstream methods on BLEU, TER, and chrF, with average BLEU improvements of 1.1 to 2.3 points. Further experiments show that, after introducing chapter-level tags and a perturbation mechanism, the model is more robust on long texts, terminology-dense corpora, and style-switching scenarios. This study provides a clearly structured, theory-driven, and transferable modeling reference for cross-domain translation in complex registers.
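To make the abstract's two core mechanisms concrete, the sketch below pairs a task-level meta-update with a contrastive semantic-consistency loss. It is a minimal illustration, not the authors' implementation: the tiny encoder and synthetic "domain tasks" are assumptions, a Reptile-style first-order update stands in for whatever meta-learning rule the paper actually uses, and an InfoNCE loss stands in for its contrastive objective.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for the paper's Transformer semantic encoder (assumption)."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

def info_nce(src_emb, tgt_emb, temperature=0.1):
    """Contrastive loss: pull aligned source/target embeddings together,
    push apart embeddings of non-aligned pairs in the same batch."""
    src_emb = F.normalize(src_emb, dim=-1)
    tgt_emb = F.normalize(tgt_emb, dim=-1)
    logits = src_emb @ tgt_emb.t() / temperature   # batch x batch similarities
    labels = torch.arange(src_emb.size(0))         # diagonal entries are positives
    return F.cross_entropy(logits, labels)

def sample_domain_task(dim=32, n=16):
    """Toy 'domain task': random aligned embedding pairs. A real setup would
    instead sample parallel sentences from one domain's corpus."""
    src = torch.randn(n, dim)
    tgt = src + 0.1 * torch.randn(n, dim)          # noisy 'translations'
    return src, tgt

encoder = TinyEncoder()
inner_lr, meta_lr, inner_steps = 0.05, 0.1, 3      # illustrative hyperparameters

for meta_step in range(100):
    src, tgt = sample_domain_task()

    # Inner loop: adapt a copy of the encoder to the sampled domain.
    adapted = copy.deepcopy(encoder)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = info_nce(adapted(src), adapted(tgt))
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()

    # Outer (Reptile-style) update: move the meta-parameters toward the
    # domain-adapted weights, so that a few gradient steps suffice to adapt
    # to a new, low-resource target domain.
    with torch.no_grad():
        for p, q in zip(encoder.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
```

Each meta-step adapts a cloned model to one domain's data and then nudges the shared parameters toward the adapted weights; it is this bias toward fast per-domain adaptation, combined with the contrastive alignment of source and target embeddings, that the abstract credits for the framework's transfer ability.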
