Strategizing AI Utilization for Psychological Literature Screening: A Comparative Analysis of Machine Learning Algorithms and Key Factors to Consider

Abstract

With the rapid growth of scholarly literature, efficient AI-aided abstract screening tools are becoming increasingly important. This study evaluated nine machine learning algorithms used in AI-aided screening tools to order abstracts by their estimated relevance. We assessed their performance in terms of the number of abstracts that must be screened to achieve a sufficient detection rate of relevant articles. Our evaluation centered on abstracts from the field of psychology, covering several research domains within the discipline. We explored how characteristics of the screening data, such as the proportion of relevant articles, the total number of abstracts, and the amount of training data, influenced the effectiveness of these algorithms. A key finding was that the algorithm combining a logistic regression classifier with the SBERT feature extractor outperformed the others, demonstrating both the highest efficiency and the lowest variability in performance. Nonetheless, its performance varied across experimental conditions. We discuss the results and derive practical recommendations to guide users in the AI-aided screening process.
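The ranking approach the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: a logistic regression classifier is trained on already-screened abstracts and used to order the remaining pool by predicted relevance. In practice the feature vectors would be SBERT sentence embeddings (e.g. via the `sentence-transformers` library); here they are replaced by random placeholder vectors so the snippet stays self-contained, and all variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder for SBERT embeddings. In a real pipeline these would come from
# something like SentenceTransformer("all-MiniLM-L6-v2").encode(texts).
n_labeled, n_unlabeled, dim = 40, 200, 384
X_labeled = rng.normal(size=(n_labeled, dim))
y_labeled = np.repeat([1, 0], [10, 30])          # 1 = relevant, 0 = irrelevant
X_unlabeled = rng.normal(size=(n_unlabeled, dim))

def rank_abstracts(X_train, y_train, X_pool):
    """Order unscreened abstracts by estimated relevance, most likely first."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores = clf.predict_proba(X_pool)[:, 1]     # P(relevant) for each abstract
    return np.argsort(-scores)                   # pool indices, descending relevance

order = rank_abstracts(X_labeled, y_labeled, X_unlabeled)
# Reviewers screen abstracts in this order; in an active-learning loop the
# classifier is periodically retrained on the newly labeled examples.
```

The efficiency metric the study reports then corresponds to how far down this ranked list reviewers must go before a sufficient fraction of the truly relevant abstracts has been found.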
