A Financial Multimodal Sentiment Analysis Model Based on Federated Learning
Abstract
With the rapid development of financial markets, accurate sentiment analysis has become increasingly important for market prediction and risk management. However, traditional centralized approaches face challenges in data privacy and cross-institutional collaboration. This paper proposes a financial multimodal sentiment analysis model based on federated learning, which integrates textual and voice data while preserving data privacy. The model employs a dual-branch parallel processing architecture for feature fusion and collaborative training. Experiments were conducted on a dataset of 4,846 paired text-speech samples drawn from financial news and analyst commentaries. Results show that the model performs well in sentiment classification, excelling in particular at neutral sentiment recognition, where it produces 316 correct predictions. The model also exhibits good convergence and generalization while keeping raw data local to each participating institution. Although challenges remain in classifying the polar (positive and negative) sentiments, this study offers a new paradigm for privacy-preserving multimodal sentiment analysis in the financial domain.
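To make the dual-branch design and the federated training loop concrete, the following is a minimal PyTorch-style sketch. The layer sizes, feature dimensions, class count, and the simple concatenation-plus-FedAvg scheme are illustrative assumptions for this sketch, not the authors' exact architecture or aggregation protocol.

```python
import torch
import torch.nn as nn

class DualBranchSentimentModel(nn.Module):
    """Illustrative dual-branch model: one branch per modality,
    with features concatenated before a shared sentiment classifier.
    Dimensions and layer choices are assumptions for this sketch."""

    def __init__(self, text_dim=768, audio_dim=128, hidden_dim=256, num_classes=3):
        super().__init__()
        # Text branch: maps pre-extracted text embeddings into a shared space.
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Voice branch: maps acoustic features (e.g., spectral statistics) similarly.
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        # Fusion head: concatenated features -> positive / neutral / negative logits.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, text_feats, audio_feats):
        fused = torch.cat([self.text_branch(text_feats),
                           self.audio_branch(audio_feats)], dim=-1)
        return self.classifier(fused)


def federated_average(client_state_dicts):
    """FedAvg-style aggregation: element-wise mean of client model weights,
    so raw text and voice data never leave the participating institutions."""
    averaged = {}
    for key in client_state_dicts[0]:
        averaged[key] = torch.stack(
            [sd[key].float() for sd in client_state_dicts]).mean(dim=0)
    return averaged
```

In this sketch, each institution trains a local copy of `DualBranchSentimentModel` on its own text-speech pairs and shares only model weights; a coordinating server calls `federated_average` and redistributes the aggregated weights each round.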