iFuzz-Meta: An Interpretable Fuzzy Meta-Learning Framework for EEG-Based Object Recognition and Identification


Abstract

How does the human brain distinguish individual objects within complex, naturalistic visual scenes, and how can this intention be decoded from EEG? Here, we propose iFuzz-Meta, an interpretable fuzzy rule-based meta-learning framework for few-shot object decoding from EEG signals. Our approach models the cognitive transition from object recognition (OR) to object identification (OI) as a meta-task, enabling the extraction of neural representations that reflect human intention. Each fuzzy rule is grounded in a neurophysiological prototype and integrated into a lightweight encoder for structured reasoning over spatial and temporal EEG patterns. To further enhance interpretability and cross-subject generalization, we introduce a frequency-regularized center loss that embeds prior knowledge about EEG spectral properties into the learning of fuzzy prototypes. Evaluated on a 25-subject dataset across OI and OR paradigms, iFuzz-Meta significantly outperforms standard and black-box baselines in few-shot settings while offering transparent, intrinsically rule-based explanations. This work bridges the gap between subject-specific adaptability and cognitive interpretability, paving the way for intention-aware, real-world BCI applications based on natural visual perception.
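The abstract does not give the formulation of the frequency-regularized center loss, so the following is only a minimal illustrative sketch of how such a loss could be composed: a standard center-loss term pulling embeddings toward learnable fuzzy prototypes, plus a regularizer keeping each prototype's spectral profile close to a prior band-power profile. All names (`band_power`, `band_prior`, `lam`) and the specific distance choices are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def frequency_regularized_center_loss(features, labels, centers,
                                       band_power, band_prior, lam=0.1):
    """Hypothetical sketch of a frequency-regularized center loss.

    features   : (N, D) encoder embeddings for a batch of EEG epochs
    labels     : (N,)   class indices (e.g., per meta-task class)
    centers    : (C, D) learnable class prototypes (fuzzy rule centers)
    band_power : (C, B) band-power profile implied by each prototype
    band_prior : (C, B) prior spectral profile (e.g., expected alpha/beta emphasis)
    lam        : weight of the frequency regularizer
    """
    # Center-loss term: pull each embedding toward its class prototype.
    center_term = F.mse_loss(features, centers[labels])

    # Frequency regularizer: keep prototype spectra near the prior profile.
    freq_term = F.mse_loss(band_power, band_prior)

    return center_term + lam * freq_term

# Toy usage with random tensors (shapes only; not real EEG data).
feats = torch.randn(8, 64)
labs = torch.randint(0, 4, (8,))
cents = torch.randn(4, 64, requires_grad=True)
bp = torch.rand(4, 5)
prior = torch.rand(4, 5)
loss = frequency_regularized_center_loss(feats, labs, cents, bp, prior)
```

The design choice sketched here is simply that spectral prior knowledge enters through the prototype term rather than the classification head; the paper's actual formulation may differ.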
