Universal Invariant Framework for Emotion Recognition in Incomplete Multimodality

Abstract

We introduce a framework for multimodal emotion recognition that remains effective when one or more data channels are absent. Unlike previous approaches, our method couples invariant feature learning with missing-modality synthesis to construct robust joint representations from incomplete inputs. By enforcing an invariant feature constraint based on the central moment discrepancy (CMD) measure and applying a cross-modality synthesis mechanism, our Universal Invariant Imagination Network (UIIN) narrows the modality gap and improves recognition accuracy. Extensive evaluations on benchmark datasets demonstrate that our approach consistently outperforms state-of-the-art methods under diverse missing-modality conditions. The framework further integrates auxiliary regularization techniques and loss functions that stabilize the learning process, enabling the network to reconcile disparities between modalities and to maintain performance even under severe data degradation. Quantitative and qualitative assessments confirm that the approach adapts to dynamic and unpredictable environments, offering a robust solution for practical affective-computing applications.
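
The abstract does not specify the exact form of the CMD-based invariant constraint, but invariant feature constraints of this kind are typically built on the standard central moment discrepancy introduced by Zellinger et al. (2017), which matches the means and higher-order central moments of two feature distributions. The sketch below is a minimal, generic illustration rather than the paper's implementation; the batch size, feature dimension, number of moments, and the assumption that encoder outputs lie in [0, 1] are all hypothetical choices.

```python
import torch

def cmd_loss(x, y, n_moments=5, a=0.0, b=1.0):
    """Central moment discrepancy between two batches of features
    x, y of shape (batch, dim). Features are assumed to lie in [a, b]
    (e.g. after a sigmoid), as the CMD formulation requires."""
    scale = abs(b - a)
    mx, my = x.mean(dim=0), y.mean(dim=0)       # per-dimension means
    loss = torch.norm(mx - my, p=2) / scale     # first-moment term
    cx, cy = x - mx, y - my                     # centred features
    for k in range(2, n_moments + 1):
        # match the k-th central moments via their Euclidean distance
        ck_x = cx.pow(k).mean(dim=0)
        ck_y = cy.pow(k).mean(dim=0)
        loss = loss + torch.norm(ck_x - ck_y, p=2) / scale ** k
    return loss

# Hypothetical usage: pull two modality embeddings toward a shared
# invariant space by minimizing their CMD.
audio_feat = torch.sigmoid(torch.randn(32, 128))  # stand-in encoder outputs
text_feat = torch.sigmoid(torch.randn(32, 128))
invariance_penalty = cmd_loss(audio_feat, text_feat)
```

In a framework like the one described, such a penalty would typically be added to the recognition loss so that features produced from synthesized (imagined) modalities are driven toward the same invariant space as features from observed ones.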