xFE-BERT: Toward Interpretable Financial Text Analysis

Abstract

As financial institutions demand openness and accountability from their automated systems, understanding model decisions has become increasingly important in financial text analysis. This study introduces xFE-BERT, an enhanced method built on Feature Extracted Bidirectional Encoder Representations from Transformers (FE-BERT), an architecture based on linearization in phrase structure, to improve explainability in financial sentence prediction. By combining the BERT architecture with methods specifically designed for the financial industry, xFE-BERT is able to extract subtle contextual information from financial texts. The model offers comprehensible and interpretable insights into its predictions by pairing a feature-extracted, fine-tuned pre-trained BERT model with the explainability approaches LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and Anchors. Thorough experiments on industry-standard financial datasets show that xFE-BERT improves transparency and outperforms existing models, achieving a prediction accuracy of 98.86%. This work paves the way for more interpretable and reliable Artificial Intelligence (AI) applications in finance, ensuring that complex models remain accountable to human scrutiny.
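To illustrate the kind of perturbation-based, model-agnostic explanation that LIME and related approaches provide, the sketch below attributes a sentiment score to individual words by measuring how the score changes when each word is removed. This is a simplified occlusion-style attribution, not the paper's actual pipeline: the toy lexicon scorer is a hypothetical stand-in for a fine-tuned BERT classifier, and all names here (`toy_sentiment_score`, `occlusion_attributions`, the cue-word sets) are illustrative assumptions.

```python
# Minimal sketch of perturbation-based word attribution, in the spirit of
# LIME's local explanations. The scorer below is a toy stand-in for a
# fine-tuned BERT sentiment model (an assumption for illustration); a real
# pipeline would call the transformer's prediction function instead.

POSITIVE = {"gain", "growth", "profit", "beat"}
NEGATIVE = {"loss", "decline", "risk", "miss"}

def toy_sentiment_score(tokens):
    """Black-box scorer: density of positive minus negative cue words."""
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

def occlusion_attributions(sentence, score_fn):
    """Importance of each word = score drop when that word is removed."""
    tokens = sentence.lower().split()
    base = score_fn(tokens)
    attributions = []
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        attributions.append((tok, base - score_fn(reduced)))
    return attributions

if __name__ == "__main__":
    sentence = "quarterly profit beat expectations despite currency risk"
    for word, weight in occlusion_attributions(sentence, toy_sentiment_score):
        print(f"{word:12s} {weight:+.3f}")
```

Words whose removal lowers the score (e.g. "profit") receive positive attributions, while words whose removal raises it (e.g. "risk") receive negative ones, giving a locally faithful, human-readable view of what drove the prediction.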
