Enhancing Multi-label Emotion Prediction through Rule-based Voting with LLM and BERT Variants
Abstract
Emotion analysis in text has become increasingly crucial for applications ranging from social media monitoring to mental health assessment. While advances in natural language processing (NLP) have improved capabilities in this area, accurately identifying and categorizing complex emotional expressions remains challenging. The difficulty arises from contextual implications and the complexity inherent in human emotions. This paper presents a novel framework that combines Large Language Models (LLMs) and BERT variants through an adaptive rule-based voting mechanism for robust multi-label emotion analysis. Our approach introduces three key components: (1) an adaptive weighted voting strategy that dynamically adjusts model contributions based on confidence scores, (2) a sophisticated prompt engineering technique that enables LLMs to better understand emotional context through template-based approaches, and (3) a hybrid decision-making mechanism that integrates the complementary strengths of the LLM and BERT architectures through rule-based aggregation. Experimental results on the SemEval-2025 Task 11 (Track A) test set demonstrate that our proposed method achieves a macro F1 of 80.42% and a micro F1 of 82.33%, outperforming the strongest individual transformer architecture (DeBERTa) by 9.8% and 7.4%, respectively, and the highest-performing LLM method (SFT Data-Augmented) by 2.1% and 1.7%, respectively. Notably, our system shows particular strength in handling complex emotional expressions and ambiguous contexts, with consistent improvements across all five emotion categories; it excels in fear detection (86.97% F1-score) and performs robustly on challenging low-frequency emotions such as anger (74.62% F1-score).
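To illustrate the first component, the sketch below shows one plausible form of confidence-weighted voting for multi-label emotion prediction. This is not the authors' implementation: the model names, confidence values, self-weighting scheme (each model's vote is weighted by its own confidence), and the 0.5 decision threshold are all illustrative assumptions.

```python
# Hypothetical sketch of confidence-weighted multi-label voting.
# Model names, scores, the self-weighting rule, and the threshold
# are assumptions for illustration, not the paper's actual method.

EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise"]

def weighted_vote(model_outputs, threshold=0.5):
    """Fuse per-model confidence scores into multi-label predictions.

    model_outputs: dict mapping model name -> {emotion: confidence in [0, 1]}.
    Each model's contribution is weighted by its own confidence, so
    confident models dominate uncertain ones (one simple way to
    "dynamically adjust model contributions based on confidence scores").
    """
    labels = []
    for emotion in EMOTIONS:
        scores = [out[emotion] for out in model_outputs.values()]
        total = sum(scores)
        if total == 0:
            continue
        # Confidence-weighted average: each score is weighted by itself.
        fused = sum(s * s for s in scores) / total
        if fused >= threshold:
            labels.append(emotion)
    return labels

# Illustrative outputs from three hypothetical ensemble members.
outputs = {
    "deberta": {"anger": 0.2, "fear": 0.90, "joy": 0.1, "sadness": 0.3, "surprise": 0.1},
    "roberta": {"anger": 0.3, "fear": 0.80, "joy": 0.2, "sadness": 0.6, "surprise": 0.1},
    "llm_sft": {"anger": 0.1, "fear": 0.95, "joy": 0.1, "sadness": 0.7, "surprise": 0.2},
}
print(weighted_vote(outputs))  # -> ['fear', 'sadness']
```

The self-weighted average pulls the fused score toward whichever models are most certain, which mirrors the paper's stated goal of letting confident models carry more weight in ambiguous cases.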