Multitask & Meta Learning for Language Models: Enhancing Aspect Based Sentiment Analysis

Abstract

This chapter presents a comprehensive investigation into improving Aspect Based Sentiment Analysis (ABSA) through multitask learning, meta-learning, and task sampling strategies within the framework of pretrained language models. Leveraging state-of-the-art models such as BERT and XLNet, the study explores the impact of pretraining tasks, particularly next-sentence prediction, on ABSA performance. Through systematic experimentation and analysis, the study demonstrates that sampling tasks according to an importance-based subtask hierarchy yields significant improvements over state-of-the-art benchmarks. The findings underscore the value of incorporating diverse tasks and sampling strategies for advancing ABSA and related natural language processing tasks, offering useful directions for future research.
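
As a rough illustration of the task sampling idea mentioned in the abstract, the sketch below draws one training task per step with probability proportional to an assumed importance weight. The task names, weights, and loop structure are hypothetical placeholders for exposition; the abstract does not specify the paper's actual subtask hierarchy or implementation.

```python
import random

# Hypothetical importance-weighted task sampling for multitask training.
# Weights are illustrative assumptions, not values from the paper.
TASK_WEIGHTS = {
    "aspect_extraction": 3.0,        # assumed high-importance ABSA subtask
    "aspect_sentiment": 2.0,         # assumed mid-importance ABSA subtask
    "next_sentence_prediction": 1.0, # assumed auxiliary pretraining task
}

def sample_task(weights, rng=random):
    """Draw one task for this step, proportional to its importance weight."""
    tasks = list(weights)
    return rng.choices(tasks, weights=[weights[t] for t in tasks], k=1)[0]

# Skeleton training loop: each step trains on a batch from the sampled
# task, so higher-weight subtasks are visited more often on average.
for step in range(5):
    task = sample_task(TASK_WEIGHTS)
    print(f"step {step}: training on {task}")
```

Proportional sampling of this kind is one common way to bias a multitask schedule toward subtasks deemed more important, while still exposing the model to auxiliary tasks.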
