A Novel Hybrid Deep Learning IChOA-CNN-LSTM Model for Modality-Enriched and Multilingual Emotion Recognition in Social Media

Abstract

In the rapidly evolving field of artificial intelligence, multimodal sentiment analysis has become increasingly important, especially amid the ongoing COVID-19 pandemic. Our research addresses the critical need to understand public sentiment across the multifaceted dimensions of this crisis by integrating data from multiple modalities, such as text, images, audio, and video, sourced from platforms like Twitter. Conventional methods, which focus primarily on text analysis, often fail to capture the nuanced intricacies of emotional states, necessitating a more comprehensive approach. To address this challenge, our proposed framework introduces a novel hybrid model, IChOA-CNN-LSTM, which leverages Convolutional Neural Networks (CNNs) for precise image feature extraction, Long Short-Term Memory (LSTM) networks for sequential data analysis, and an Improved Chimp Optimization Algorithm (IChOA) for effective feature fusion. Our model achieves an accuracy of 97.8%, outperforming existing approaches in the field. Furthermore, by incorporating the GeoCoV19 dataset, we enable a comprehensive analysis that spans linguistic and geographical boundaries, enriching our understanding of global pandemic discourse and furnishing insights for informed decision-making in public health crises. Through this holistic approach, our research contributes to advancing multimodal sentiment analysis, providing a robust framework for deciphering the complex interplay of emotions amid unprecedented global challenges such as the COVID-19 pandemic.
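The abstract describes optimization-driven fusion of per-modality features, but gives no algorithmic detail. The sketch below illustrates the general idea only: two modality feature vectors are combined by a learned fusion weight, and a search procedure picks the weight that maximizes a fitness score. The random search here is a hypothetical stand-in for IChOA, and the centroid-separation fitness is a toy proxy for validation accuracy; neither the function names nor the objective come from the paper.

```python
import random

def fuse(text_vec, image_vec, w):
    """Weighted late fusion: scale each modality's features by its
    fusion weight and concatenate them into one vector."""
    return [w * t for t in text_vec] + [(1.0 - w) * v for v in image_vec]

def fitness(w, pos, neg):
    """Toy objective: squared distance between the fused vectors of a
    positive- and a negative-sentiment sample (a hypothetical stand-in
    for classifier accuracy on a validation set)."""
    fused_pos = fuse(*pos, w)
    fused_neg = fuse(*neg, w)
    return sum((a - b) ** 2 for a, b in zip(fused_pos, fused_neg))

def optimize_fusion_weight(pos, neg, iters=200, seed=0):
    """Random-search stand-in for IChOA: search w in [0, 1] for the
    fusion weight that best separates the two classes."""
    rng = random.Random(seed)
    best_w = 0.5
    best_f = fitness(best_w, pos, neg)
    for _ in range(iters):
        w = rng.random()
        f = fitness(w, pos, neg)
        if f > best_f:
            best_w, best_f = w, f
    return best_w

# Each sample is a (text_features, image_features) pair; here the text
# features are more discriminative, so the search favors a high text weight.
pos = ([1.0, 0.0], [0.2, 0.1])
neg = ([0.0, 1.0], [0.1, 0.2])
w = optimize_fusion_weight(pos, neg)
```

In the actual model, the text and image vectors would come from the LSTM and CNN branches respectively, and IChOA would search a much larger fusion parameter space; this sketch only shows why an optimizer, rather than a fixed concatenation, can improve the fused representation.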
