LLM Aspect Prediction: Reviewing Academic Papers from Different Aspects with Large Language Model


Abstract

Peer review is a vital process in scholarly publishing, where reviewers assess and score various aspects of a manuscript—such as novelty, clarity, and significance—against defined evaluation criteria. This process demands substantial time and cognitive effort and remains prone to human bias and inconsistency. To address these challenges, we present LLMAspectPrediction, a framework that predicts fine-grained aspect scores of academic papers, assisting reviewers through consistent, guideline-informed assessments and offering authors actionable feedback aligned with peer review standards. The method comprises three stages. First, raw texts are organized to fit the input format; in parallel, a vector database built from an additional corpus enables retrieval of content-similar papers based on their topic distributions. Second, prompt templates grounded in peer review rubrics guide LLM-based evaluations of specific aspects. Finally, LLM-generated evaluations serve as weak supervision signals to fine-tune a pre-trained model for robust score prediction. Experiments demonstrate that our approach achieves state-of-the-art performance and that each component contributes to overall model effectiveness.
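The first stage's retrieval step can be illustrated with a minimal sketch. The abstract does not specify the retrieval mechanism beyond "topic distribution," so the function names, the cosine-similarity metric, and the toy topic vectors below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two topic-distribution vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve_similar(query_topics, corpus, k=2):
    # corpus: list of (paper_id, topic_distribution) pairs from the
    # additional corpus indexed in the vector database (hypothetical format).
    ranked = sorted(corpus,
                    key=lambda item: cosine_similarity(query_topics, item[1]),
                    reverse=True)
    return [paper_id for paper_id, _ in ranked[:k]]

# Toy example: three indexed papers with 3-topic distributions.
corpus = [
    ("A", [0.70, 0.20, 0.10]),
    ("B", [0.10, 0.80, 0.10]),
    ("C", [0.75, 0.15, 0.10]),
]
print(retrieve_similar([0.8, 0.1, 0.1], corpus, k=2))  # → ['C', 'A']
```

In practice a production system would use a dedicated vector index (e.g. an approximate nearest-neighbor store) rather than a linear scan, but the ranking principle is the same.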
