Adaptive Bottleneck Architecture Search for Resource-Constrained Continual Learning
Abstract
In this paper, we propose a framework that jointly addresses neural architecture selection and resource allocation in resource-constrained continual learning. We formulate adaptive bottleneck architecture search (ABAS) as a mixed-integer optimization problem, enabling neural architectures to adapt dynamically to evolving data streams under limited computation and memory budgets. Our approach uses Lagrangian relaxation to balance trade-offs among model accuracy, resource utilization, and knowledge retention. Theoretical analysis and extensive empirical evaluation show that our method outperforms existing continual learning strategies and neural architecture search techniques, yielding significant improvements in accuracy, forgetting, and resource efficiency. By bridging architecture adaptability and learning efficiency, this work supports more effective deployment of machine learning systems in settings where resources are constrained, and it opens avenues for future exploration of adaptive learning systems in diverse contexts.
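To illustrate the kind of formulation the abstract describes, the sketch below shows a generic Lagrangian relaxation of a resource-constrained discrete architecture choice. This is not the paper's ABAS algorithm: the candidate bottleneck widths, accuracy proxies, costs, and budget are all invented for demonstration. A Lagrange multiplier on the resource constraint turns the constrained problem into an unconstrained one, and dual ascent raises the multiplier whenever the selected architecture exceeds the budget.

```python
# Illustrative sketch only (hypothetical numbers, not the paper's method):
# Lagrangian relaxation over a discrete set of candidate architectures.

def lagrangian_search(candidates, budget, steps=100, lr=0.05):
    """Alternate a primal step (pick the architecture minimizing the
    Lagrangian L = -accuracy + lam * (cost - budget)) with a dual step
    (projected gradient ascent on the multiplier lam)."""
    lam = 0.0
    best = None
    for _ in range(steps):
        # Primal: minimize the Lagrangian over the discrete candidate set.
        best = min(candidates,
                   key=lambda c: -c["acc"] + lam * (c["cost"] - budget))
        # Dual: increase lam if the chosen architecture exceeds the budget;
        # project back to lam >= 0 otherwise.
        lam = max(0.0, lam + lr * (best["cost"] - budget))
    return best, lam

# Hypothetical candidates: accuracy proxy vs. resource cost.
candidates = [
    {"name": "narrow", "acc": 0.78, "cost": 1.0},
    {"name": "medium", "acc": 0.85, "cost": 2.5},
    {"name": "wide",   "acc": 0.88, "cost": 4.0},
]

# With a tight budget, the multiplier rises until only the cheapest
# architecture is competitive.
best, lam = lagrangian_search(candidates, budget=1.0)
```

With a loose budget the constraint is inactive, the multiplier stays at zero, and the search simply returns the most accurate architecture; with a tight budget the penalty term steers the choice toward cheaper candidates.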