Improving English Machine Translation via Adversarial Transfer Learning–Based Domain Adaptation

Abstract

This paper investigates the application of adversarial transfer learning to domain-adaptive English machine translation. When a model trained on one dataset is applied to another with a different distribution, performance typically degrades; domain adaptation addresses this challenge, and adversarial transfer learning offers a practical route to cross-domain generalisation. The proposed approach employs adversarial training to align source- and target-domain feature spaces, enhancing translation quality in the absence of domain-specific labelled data and mitigating negative transfer effects. Previous studies have explored supervised, semi-supervised, and unsupervised adaptation with adversarial learning; GAN-based and gradient-reversal methods improve cross-domain translation, but their robustness remains limited. The proposed model integrates a feature extractor, a label classifier, and a domain discriminator, aligning multiscale fused features through domain-invariant representations. Experimental results show an accuracy of 98.53%, significantly outperforming baselines such as AutoGluon (88.75%) and MMD-based methods (85.46%), while also achieving higher F1 scores and lower GPU time.
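
The abstract describes a model combining a feature extractor, a label classifier, and a domain discriminator trained adversarially, in the spirit of gradient-reversal approaches. The sketch below illustrates that general pattern in PyTorch; the module names, layer sizes, and the `lambda_` scaling factor are illustrative assumptions and are not taken from the paper.

```python
# Minimal DANN-style sketch of adversarial domain adaptation with gradient reversal.
# Architecture details (dimensions, module names) are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradientReversal(Function):
    """Identity in the forward pass; reverses and scales gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the feature extractor to produce
        # representations that confuse the domain discriminator.
        return -ctx.lambda_ * grad_output, None


class AdversarialDomainModel(nn.Module):
    def __init__(self, input_dim=512, feature_dim=256, num_classes=2):
        super().__init__()
        # Shared feature extractor intended to learn domain-invariant representations.
        self.features = nn.Sequential(
            nn.Linear(input_dim, feature_dim), nn.ReLU(),
            nn.Linear(feature_dim, feature_dim), nn.ReLU(),
        )
        # Label classifier trained on labelled source-domain data.
        self.label_classifier = nn.Linear(feature_dim, num_classes)
        # Domain discriminator trained to distinguish source from target features.
        self.domain_discriminator = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        class_logits = self.label_classifier(f)
        # Gradient reversal sits between the shared features and the discriminator.
        domain_logits = self.domain_discriminator(GradientReversal.apply(f, lambda_))
        return class_logits, domain_logits
```

In this pattern, the label-classification loss is computed on labelled source-domain examples, while the domain-classification loss uses both source and target features; the reversed gradient drives the shared extractor toward representations on which the discriminator cannot separate the two domains.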
