Facial Expression Recognition via Variational Inference
Abstract
Subtle variations in facial expressions are often difficult to capture and distinguish, which places higher demands on a model's feature extraction and discriminative capabilities. While prior facial expression recognition (FER) studies have primarily focused on intra-class differences and inter-class similarities among facial images, the internal inconsistencies within a single facial image have received comparatively limited attention. In this paper, we propose a Variational Inference-based Classification Head (VICH) that subtly adjusts the contribution of features associated with different expression classes in a single image. This method encourages the model to learn the inherent inconsistency and uncertainty of facial expressions and to make decisions based on broader regional features. Furthermore, we enhance multi-stage feature fusion by incorporating layer embedding and a nonlinear transformation into the baseline model, which more effectively exploits deep-layer information in the feature pyramid. Experimental results on standard FER benchmarks (e.g., RAF-DB, AffectNet, FER2013) demonstrate that our method achieves performance comparable to or better than state-of-the-art approaches, highlighting its effectiveness in addressing fine-grained expression recognition challenges. The code is available at https://github.com/lg2578/poster-var.
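To make the idea of a variational classification head concrete, the following is a minimal PyTorch sketch of one common formulation: the head predicts a Gaussian posterior over class scores, samples logits with the reparameterization trick during training, and adds a KL regularizer toward a standard normal prior. This is an illustrative assumption, not the authors' VICH; the class name `VariationalClassificationHead`, the choice of a Gaussian over logits, and the KL weight are hypothetical, and the actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalClassificationHead(nn.Module):
    """Illustrative variational classification head (not the paper's exact VICH).

    Instead of producing a single deterministic logit per class, the head
    predicts a Gaussian posterior over class scores and samples from it,
    letting the classifier express uncertainty about which features support
    each expression class.
    """

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.mu = nn.Linear(feat_dim, num_classes)       # posterior mean of class scores
        self.logvar = nn.Linear(feat_dim, num_classes)   # posterior log-variance

    def forward(self, feats: torch.Tensor):
        mu = self.mu(feats)
        logvar = self.logvar(feats)
        if self.training:
            # Reparameterization trick: sample logits from N(mu, sigma^2)
            std = torch.exp(0.5 * logvar)
            logits = mu + std * torch.randn_like(std)
        else:
            logits = mu  # use the posterior mean at inference time
        # KL divergence to a standard normal prior, used as a regularizer
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl


if __name__ == "__main__":
    # Usage sketch: combine cross-entropy with the KL term
    # (the 1e-3 weight is a hypothetical hyperparameter).
    head = VariationalClassificationHead(feat_dim=512, num_classes=7)
    feats = torch.randn(8, 512)           # stand-in for backbone features
    labels = torch.randint(0, 7, (8,))
    logits, kl = head(feats)
    loss = F.cross_entropy(logits, labels) + 1e-3 * kl
    loss.backward()
```

Sampling the logits rather than using a point estimate is one way a model can be pushed to spread its decision over broader regional evidence instead of relying on a few brittle features, which is the behavior the abstract attributes to VICH.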