Computation of Sentence Similarity Score through Hybrid Deep Learning with a Special Focus on Negation Sentences.
Abstract
Automated answer script evaluation relies heavily on accurate sentence similarity assessment. Traditional methods often struggle with linguistic nuances, particularly negation, where misinterpretations can lead to incorrect grading and biased assessments. To address these challenges, we propose a hybrid deep learning framework designed to enhance sentence similarity detection, thereby improving the accuracy and reliability of automated evaluation systems. Our model integrates the advanced embedding capabilities of BERT, RoBERTa, Sentence-BERT, and Word2Vec into a unified representation. A Siamese network with a bi-directional LSTM serves as the core computational component, enabling precise similarity scoring. This approach strengthens the model's ability to understand negated statements and complex sentence structures. We evaluated our model on a specialized dataset containing examples of negations and conjunctions. The results demonstrated a significant improvement over existing methods, achieving an AU-ROC score of 0.984. Additionally, our model outperformed baseline approaches across Mean Absolute Error (MAE), Mean Squared Error (MSE), and R² metrics. These findings confirm the model's effectiveness and highlight its potential for enhancing automated grading systems and other applications requiring precise interpretation of textual meaning.
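To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of a Siamese bi-directional LSTM similarity scorer of the kind outlined in the abstract. The embedding dimension, the fusion of the four embedding sources into one sequence of vectors (assumed here to be simple concatenation done upstream), the mean-pooling step, and the cosine-similarity head are all illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseBiLSTM(nn.Module):
    """Shared bi-LSTM encoder applied to both sentences of a pair."""

    def __init__(self, embed_dim: int = 1024, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.LSTM(
            input_size=embed_dim,
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )

    def encode(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim). In the paper's setup these
        # would be fused BERT/RoBERTa/Sentence-BERT/Word2Vec vectors (assumed here).
        outputs, _ = self.encoder(token_embeddings)
        # Mean-pool over time to obtain one fixed-size vector per sentence.
        return outputs.mean(dim=1)

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor) -> torch.Tensor:
        vec_a = self.encode(sent_a)
        vec_b = self.encode(sent_b)
        # Cosine similarity in [-1, 1], rescaled to a [0, 1] similarity score.
        return (F.cosine_similarity(vec_a, vec_b) + 1.0) / 2.0


if __name__ == "__main__":
    model = SiameseBiLSTM()
    # Dummy batch: 2 sentence pairs, 12 tokens each, 1024-dim fused embeddings.
    a = torch.randn(2, 12, 1024)
    b = torch.randn(2, 12, 1024)
    print(model(a, b))  # two similarity scores in [0, 1]
```

Because the encoder weights are shared between the two branches, the network scores each pair symmetrically; a regression or contrastive loss against gold similarity labels (e.g. for MAE/MSE-style evaluation) could be attached to the output, though the exact training objective is not specified in the abstract.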