LCJM: Joint Modeling of Multi-Intent Spoken Language Understanding with Label Attention and Chunk Partitioning

Abstract

Spoken Language Understanding (SLU) is a critical task in natural language processing and human–computer interaction, consisting of intent detection and slot filling. However, multi-intent utterances and complex semantic structures pose significant challenges, including error propagation, noise sensitivity in long utterances, and insufficient modeling of intent–slot interactions. To address these issues, we propose LCJM (Label-Chunk Joint Model), a novel framework that integrates Label Attention and a Chunk-based Sliding Window mechanism. Label Attention captures intent–slot dependencies guided by corpus-level priors, enhancing semantic consistency and mitigating the impact of mispredicted intents. The chunking strategy divides utterances into overlapping chunks, improving local context modeling and robustness to noisy or long inputs. Additionally, we incorporate a graph-based label interaction network to model global semantic dependencies across tasks. Extensive experiments on two benchmark datasets, MixATIS and MixSNIPS, show that LCJM achieves superior performance in both intent detection and slot filling. These findings demonstrate that LCJM effectively balances global semantic integration and local contextual awareness, providing a cognitively inspired approach for robust multi-intent SLU.
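
To make the chunk-based sliding window concrete, the following is a minimal Python sketch of how an utterance could be partitioned into overlapping chunks; the function name, chunk size, and stride are illustrative assumptions and are not specified in the abstract:

    def chunk_utterance(tokens, chunk_size=8, stride=4):
        """Split a token sequence into overlapping chunks via a sliding window.

        chunk_size and stride are illustrative values, not taken from the paper.
        The overlap (chunk_size - stride tokens) lets neighboring chunks share
        local context, which is the property the abstract attributes to the
        chunking strategy.
        """
        chunks = []
        for start in range(0, len(tokens), stride):
            chunks.append((start, tokens[start:start + chunk_size]))
            if start + chunk_size >= len(tokens):
                break
        return chunks

    if __name__ == "__main__":
        # A hypothetical ATIS-style multi-intent utterance, used only for illustration.
        utterance = ("show me flights from denver to boston and "
                     "list ground transportation in boston").split()
        for start, chunk in chunk_utterance(utterance):
            print(start, chunk)

Each chunk can then be encoded with its local context before the chunk-level representations are merged for intent detection and slot filling, consistent with the local-versus-global balance described above.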
