A Structured and Methodological Review on Multi-View Human Activity Recognition for Ambient Assisted Living

Abstract

Ambient Assisted Living (AAL) leverages technology to support the elderly and individuals with disabilities. A key challenge in AAL systems is efficient human activity recognition (HAR), yet no study has systematically compared single-view (SV) and multi-view (MV) HAR. This review addresses that gap by analyzing the evolution from SV-HAR to MV-HAR, covering benchmark datasets, feature extraction methods, and classification approaches. We examine how HAR systems have transitioned to MV configurations with advanced deep learning architectures optimized for AAL, improving accuracy and robustness. Additionally, we explore machine learning and deep learning models (including CNNs, RNNs, LSTMs, TCNs, and GCNs) as well as lightweight transfer learning techniques for resource-constrained environments. Key challenges such as data remediation, privacy, and generalization are discussed alongside potential solutions such as sensor fusion and advanced learning methods. Our study provides insights into recent advancements and future directions, guiding the development of intelligent, efficient, and privacy-compliant HAR systems for AAL.