Cross-Lingual Transfer with Typological Constraints: A Case Study in Low-Resource NLP


Abstract

Cross-lingual transfer learning has become a cornerstone of multilingual NLP, yet performance disparities persist for low-resource languages, particularly those whose typological features diverge from those of high-resource source languages. This paper investigates how explicit typological constraints, derived from databases such as the World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013), can guide parameter sharing and alignment in multilingual models. Building on recent work in typologically informed neural architectures (Ponti et al., 2020; Bjerva and Augenstein, 2021), we propose a novel adapter-based framework that conditions layer-wise transformations on syntactic and morphological features. Our experiments on three low-resource languages (Arapaho, Uyghur, and Tsez) demonstrate that typological guidance reduces negative interference and improves transfer accuracy by up to 12% over unconstrained baselines. We further analyze the interplay between feature granularity and model performance, drawing on insights from linguistic typology (Bickel and Nichols, 2017) and low-resource NLP (Joshi et al., 2020).
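The abstract does not spell out the conditioning mechanism, so the following PyTorch sketch is only one plausible reading of "conditions layer-wise transformations on syntactic and morphological features": a standard bottleneck adapter whose hidden activation is gated, FiLM-style, by a projection of a WALS-derived feature vector. All names here (TypologyConditionedAdapter, typo_gate, the feature dimensionality of 103) are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TypologyConditionedAdapter(nn.Module):
    """Bottleneck adapter modulated by a language-level typological
    feature vector (e.g. binarized WALS features).

    Hypothetical sketch; not the paper's released implementation.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int, num_typo_features: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Project typological features to a per-dimension gate in [0, 1]
        # (FiLM-style scaling of the adapter's bottleneck activation).
        self.typo_gate = nn.Sequential(
            nn.Linear(num_typo_features, bottleneck_dim),
            nn.Sigmoid(),
        )

    def forward(self, hidden: torch.Tensor, typo_vec: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); typo_vec: (batch, num_typo_features)
        z = torch.relu(self.down(hidden))
        gate = self.typo_gate(typo_vec).unsqueeze(1)  # (batch, 1, bottleneck_dim)
        z = z * gate                                  # condition on typology
        return hidden + self.up(z)                    # residual connection

# Usage: one WALS-style feature vector per language, broadcast over the sequence.
adapter = TypologyConditionedAdapter(hidden_dim=768, bottleneck_dim=64, num_typo_features=103)
hidden = torch.randn(2, 16, 768)              # e.g. an mBERT layer output
typo = torch.randint(0, 2, (2, 103)).float()  # binarized typological features
out = adapter(hidden, typo)
print(out.shape)  # torch.Size([2, 16, 768])
```

The per-dimension sigmoid gate is a deliberately simple choice: languages with similar typological profiles receive similar adapter transformations, which is one way typological guidance could steer parameter sharing and limit negative interference between dissimilar languages.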
