Hybrid Modal Decoupled Fusion for Stable Multilingual Code Generation

Abstract

Multilingual code generation remains challenging due to unstable convergence, weak semantic alignment, and task imbalance across programming languages. Existing instruction fine-tuning methods often fail to balance heterogeneous tasks, which leads to biased optimization and poor generalization to low-resource languages. To address these problems, we present MFTCoder++, an improved fine-tuning framework that combines adaptive task scheduling, attention-guided optimization, adversarial regularization, and hybrid fusion of logical and syntactic components. The framework shifts training focus in real time, keeps gradients and attention consistent across tasks, learns task-independent features, and decouples semantic logic from language-specific syntax before recombining them through a gating mechanism (see the sketches below). This design improves training stability, semantic alignment, and cross-lingual transfer, yielding a more reliable and practical solution for multilingual code generation in software engineering practice.
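
To make the "adaptive task scheduling" idea concrete, here is a minimal sketch of one plausible realization: sample the next training task in proportion to a smoothed, recent loss, so lagging tasks (e.g. low-resource languages) receive more updates. The class and method names are illustrative assumptions, not the paper's actual interface.

```python
import random


class AdaptiveTaskScheduler:
    """Hypothetical sketch: re-weight task sampling by recent loss.

    Tasks whose exponential-moving-average loss stays high (i.e. are
    converging slowly) are sampled more often, shifting training focus
    in real time. All names here are assumptions, not from the paper.
    """

    def __init__(self, task_names, momentum=0.9):
        self.ema = {t: 1.0 for t in task_names}  # per-task smoothed loss
        self.momentum = momentum

    def update(self, task, loss):
        # Fold the latest observed loss into the task's running average.
        self.ema[task] = self.momentum * self.ema[task] + (1 - self.momentum) * loss

    def sample(self):
        # Pick the next task proportionally to its smoothed loss, so
        # poorly-converged tasks get a larger share of the updates.
        tasks, weights = zip(*self.ema.items())
        return random.choices(tasks, weights=weights, k=1)[0]


# Usage inside a training loop (illustrative):
# scheduler = AdaptiveTaskScheduler(["python", "rust", "go"])
# task = scheduler.sample()
# ... compute loss on a batch from `task` ...
# scheduler.update(task, loss)
```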
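The hybrid fusion step can likewise be sketched as a learned gate that mixes a language-agnostic "logic" representation with a language-specific "syntax" representation. This is a minimal PyTorch illustration of a gated fusion, assuming both representations share the same hidden dimension; the module and tensor names are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn


class GatedHybridFusion(nn.Module):
    """Hypothetical sketch of gated fusion of decoupled representations.

    A sigmoid gate decides, per feature, how much of the semantic-logic
    vector versus the language-syntax vector enters the fused output.
    """

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_logic, h_syntax):
        # Gate values in [0, 1] interpolate between the two streams.
        g = torch.sigmoid(self.gate(torch.cat([h_logic, h_syntax], dim=-1)))
        return g * h_logic + (1 - g) * h_syntax


# Usage (illustrative):
# fusion = GatedHybridFusion(dim=768)
# fused = fusion(h_logic, h_syntax)  # both tensors of shape (..., 768)
```

Keeping the gate feature-wise (rather than a single scalar) lets the model retain shared semantics where languages agree while deferring to language-specific syntax where they diverge.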