Do You Actually Need an LLM? Rethinking Language Models for Customer Reviews Analysis

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language processing, but their high computational costs raise questions about their practical utility compared to small language models (SLMs). This study presents a comprehensive comparison of SLMs and LLMs on two critical customer review analysis tasks: sentiment polarity classification and correlation analysis with product categories. We evaluate state-of-the-art SLMs (DistilBERT, ELECTRA) and LLMs (Flan-T5, Flan-UL2) using benchmark datasets, assessing accuracy, F1 scores, computational runtime, memory usage, and FLOPs. Our findings reveal that LLMs excel in sentiment polarity classification but at significantly higher computational costs, while SLMs demonstrate superior performance and efficiency in the domain-specific correlation analysis task. To optimize the trade-off between accuracy and efficiency, we propose a novel hybrid system integrating SLMs and LLMs through a tiered processing strategy. This research provides valuable insights into the strategic utilization of language models for customer review analysis, enabling businesses to make informed decisions that balance computational resources and accuracy requirements. Our results have important implications for the practical application of AI in business analytics and customer insight generation.
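The tiered processing strategy described in the abstract can be illustrated with a minimal sketch: a cheap SLM handles each review first, and only low-confidence cases are escalated to an LLM. The paper does not publish its exact routing rule, so the confidence threshold, the function names, and the toy classifiers below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a tiered SLM -> LLM routing strategy.
# The threshold and both classifiers are illustrative stand-ins.

def slm_classify(text):
    """Stand-in for a fast SLM (e.g. DistilBERT) sentiment head.
    Returns (label, confidence); here a toy lexicon scorer."""
    positive = {"great", "love", "excellent"}
    negative = {"bad", "terrible", "awful"}
    tokens = text.lower().split()
    pos = sum(t in positive for t in tokens)
    neg = sum(t in negative for t in tokens)
    total = pos + neg
    if total == 0:
        return "neutral", 0.0          # no lexical signal -> low confidence
    label = "positive" if pos >= neg else "negative"
    return label, max(pos, neg) / total

def llm_classify(text):
    """Stand-in for a slower, more accurate LLM (e.g. Flan-T5)."""
    return "negative" if "not" in text.lower() else "positive"

def tiered_classify(text, threshold=0.8):
    """Accept the SLM answer when it is confident enough,
    otherwise pay the extra cost of an LLM call."""
    label, conf = slm_classify(text)
    if conf >= threshold:
        return label, "slm"            # cheap path
    return llm_classify(text), "llm"   # escalation path

print(tiered_classify("great phone, love it"))   # SLM is confident
print(tiered_classify("it arrived on time"))     # escalated to the LLM
```

The design choice this sketch captures is the abstract's cost/accuracy trade-off: most reviews exit at the cheap tier, so the LLM's higher runtime, memory, and FLOP costs are paid only on the ambiguous minority.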
