Hybrid Neural Tangent Kernel–SGD Optimization for Robust and Scalable Deep Learning Across Medical, Sensor, and Image Domains
Abstract
Efficient and reliable training of deep neural networks remains a major challenge, particularly in fields such as medical imaging, sensor analysis, and ecological monitoring, where data are often noisy, high-dimensional, and heterogeneous. Here we introduce a hybrid optimization framework that integrates Neural Tangent Kernel (NTK) theory with Stochastic Gradient Descent (SGD). The approach leverages NTK-based initialization to stabilize early training, followed by SGD fine-tuning to enable adaptability and scalability. We evaluate this method on diverse datasets, including ISIC 2018 (skin lesion analysis), NIH ChestX-ray14 (radiography), Tiny ImageNet (natural image classification), and UCI HAR (sensor-based activity recognition). Across all benchmarks, the hybrid NTK–SGD method consistently outperforms NTK-only baselines and matches or exceeds standard SGD, while delivering faster convergence and improved robustness to adversarial noise. By uniting theory-driven stability with data-driven flexibility, NTK–SGD offers a generalizable, interpretable, and computationally efficient training strategy. These results highlight its potential for cross-domain deployment in medical, environmental, and industrial AI applications, where both accuracy and resilience are critical.
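The abstract does not include the authors' implementation; the following is a minimal PyTorch sketch of the two-phase idea it describes, an NTK-stabilized early phase followed by SGD fine-tuning. The 1/sqrt(fan-in) NTK parameterization, the warm-up/fine-tune learning-rate split, and all names and hyperparameters (NTKLinear, train_hybrid, warmup_epochs, warmup_lr, finetune_lr) are illustrative assumptions, not the paper's method.

```python
import math
import torch
import torch.nn as nn


class NTKLinear(nn.Module):
    # Linear layer in NTK parameterization (an assumption about the paper's
    # "NTK-based initialization"): weights are drawn from N(0, 1) and the
    # forward pass is rescaled by 1/sqrt(fan_in), which keeps a wide network
    # close to its linearization (the kernel regime) around initialization.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.scale = 1.0 / math.sqrt(in_features)

    def forward(self, x):
        return nn.functional.linear(x, self.scale * self.weight, self.bias)


def make_model(in_dim, hidden, out_dim):
    return nn.Sequential(NTKLinear(in_dim, hidden), nn.ReLU(),
                         NTKLinear(hidden, out_dim))


def train_hybrid(model, loader, warmup_epochs=5, total_epochs=50,
                 warmup_lr=1e-3, finetune_lr=1e-1):
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=warmup_lr, momentum=0.9)
    for epoch in range(total_epochs):
        # Phase 1 (NTK-style warm-up): a small step size keeps parameters
        # near initialization, so training dynamics are governed by an
        # approximately constant tangent kernel, stabilizing early epochs.
        # Phase 2 (SGD fine-tuning): a larger step size lets SGD leave the
        # kernel regime and adapt features to the data.
        for group in opt.param_groups:
            group["lr"] = warmup_lr if epoch < warmup_epochs else finetune_lr
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

For instance, on UCI HAR (561 input features, 6 activity classes) one might call train_hybrid(make_model(561, 1024, 6), loader); the hidden width of 1024 is a placeholder choice.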