AB-TC-BLATT: A Resource-Efficient Parallel System Architecture with Frozen ALBERT for Practical Chinese Sentiment Analysis

Abstract

The exponential growth of user-generated reviews on e-commerce and social media platforms has made sentiment analysis an indispensable tool for mining consumer insights. However, accurate sentiment analysis of Chinese text presents unique challenges, including word segmentation ambiguity and pervasive polysemy, which are often exacerbated in practical deployment scenarios characterized by limited labeled data or constrained computational budgets. While pre-trained models such as ALBERT provide powerful contextual representations, their conventional serial integration with downstream neural networks not only risks creating an information bottleneck but also incurs high computational cost and potential overfitting when training data is scarce. To address these practical deployment issues, this paper proposes AB-TC-BLATT, a resource-efficient, practically oriented model featuring a novel dual-channel parallel architecture. The core design keeps the ALBERT parameters frozen to preserve general linguistic knowledge and ensure training efficiency, while a parallel, lightweight local channel employs trainable embeddings with TextCNN and BiLSTM-Attention to adaptively capture task-specific and Chinese-specific linguistic patterns from the available, often limited, data. A hierarchical fusion strategy then integrates these complementary features. Comprehensive experiments and a thorough system-efficiency analysis on two Chinese review datasets show that AB-TC-BLATT achieves superior accuracy (94.38% and 91.43%). Crucially, it maintains robust performance under simulated low-resource conditions and exhibits strong generalization potential. From a system-engineering perspective, it also reduces trainable parameters by 63.5% and training time by 52.8% compared with serial fusion counterparts. The model thus offers an effective, efficient, and practical paradigm for building scalable, low-resource sentiment analysis systems, providing a reusable architectural template particularly suited to resource-constrained Chinese-language applications.
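The frozen-global/trainable-local split described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the dimensions, the mean-pooled dense encoders standing in for ALBERT and for TextCNN/BiLSTM-Attention, and the simple concatenation used in place of the hierarchical fusion are all illustrative assumptions. The point it demonstrates is that only the local channel contributes trainable parameters, while the frozen channel still contributes features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only.
VOCAB, EMB, SEQ, HIDDEN = 1000, 32, 16, 64

# Global channel: a frozen projection standing in for the frozen ALBERT
# encoder. These weights are never updated, so they add no trainable cost.
W_frozen = rng.standard_normal((EMB, HIDDEN))

# Local channel: trainable embeddings plus a lightweight encoder, standing
# in for the TextCNN / BiLSTM-Attention branch. These ARE trainable.
E_local = rng.standard_normal((VOCAB, EMB))
W_local = rng.standard_normal((EMB, HIDDEN))

def forward(token_ids, frozen_reprs):
    # Global features from pre-computed frozen contextual representations.
    g = np.tanh(frozen_reprs @ W_frozen).mean(axis=0)
    # Local features from the trainable embedding channel.
    l = np.tanh(E_local[token_ids] @ W_local).mean(axis=0)
    # Fusion, simplified here to concatenation of the two channels.
    return np.concatenate([g, l])

token_ids = rng.integers(0, VOCAB, SEQ)
frozen_reprs = rng.standard_normal((SEQ, EMB))  # stand-in for ALBERT outputs
feat = forward(token_ids, frozen_reprs)

# Only the local channel's parameters would receive gradient updates.
trainable = E_local.size + W_local.size
frozen = W_frozen.size
```

Because the two channels run in parallel rather than in series, the frozen representations are fused alongside, not filtered through, the task-specific features, which is the bottleneck-avoidance argument the abstract makes.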
