Sparse-Interpretable Neural Architecture Search via Submodular Constraint Optimization
Abstract
The proliferation of increasingly complex neural networks has raised significant challenges for model interpretability and sparsity, particularly in critical domains such as healthcare and finance. This paper addresses these challenges with a novel optimization framework for Neural Architecture Search (NAS) that leverages submodular constraint optimization. The framework explicitly balances the trade-off between model complexity, interpretability, and sparsity while maintaining competitive predictive performance. We formulate NAS as a combinatorial optimization task subject to submodular constraints, ensuring that selected architectures satisfy specified interpretability and sparsity criteria. We establish theoretical guarantees on the existence of optimal solutions under the proposed framework, and empirical evaluations on benchmark datasets demonstrate superior sparsity-interpretability trade-offs compared with existing NAS techniques. Our findings underscore the effectiveness of incorporating submodular constraints in building reliable and efficient AI systems, offering practical methodologies for enhancing the interpretability and efficiency of neural networks in real-world applications.
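To make the idea of architecture selection under a submodular constraint concrete, the following is a minimal sketch, not the paper's actual formulation: it assumes a coverage-style monotone submodular score over a handful of hypothetical candidate operations and a simple cardinality budget standing in for the sparsity constraint, and applies the classic greedy algorithm, which enjoys the standard (1 - 1/e) approximation guarantee for monotone submodular maximization.

```python
# Illustrative sketch only: greedy selection of architecture components under a
# sparsity (cardinality) budget, assuming a monotone submodular objective.
# The candidate operations, the coverage-style score, and the budget are
# hypothetical placeholders, not the framework proposed in the paper.

# Hypothetical candidate operations, each "covering" a set of feature groups.
CANDIDATE_OPS = {
    "conv3x3":   {"texture", "edges"},
    "conv5x5":   {"texture", "shapes"},
    "skip":      {"identity"},
    "attention": {"context", "shapes"},
    "pool":      {"edges"},
}


def coverage_score(selected):
    """Number of feature groups covered: a classic monotone submodular function."""
    covered = set()
    for op in selected:
        covered |= CANDIDATE_OPS[op]
    return len(covered)


def greedy_select(budget):
    """Greedily pick operations with the largest marginal gain until the budget is hit."""
    selected = []
    while len(selected) < budget:
        best_op, best_gain = None, 0
        for op in CANDIDATE_OPS:
            if op in selected:
                continue
            gain = coverage_score(selected + [op]) - coverage_score(selected)
            if gain > best_gain:
                best_op, best_gain = op, gain
        if best_op is None:  # no operation adds positive marginal value
            break
        selected.append(best_op)
    return selected


if __name__ == "__main__":
    # With a budget of 3, greedy selection favors operations that cover
    # complementary feature groups rather than redundant ones.
    print(greedy_select(budget=3))
```

In the paper's setting, the submodular objective and constraints would instead encode interpretability and sparsity metrics over candidate architectures; the greedy scheme above merely illustrates why submodular structure makes such combinatorial selection tractable with provable guarantees.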