Impact of Hyperparameter Optimisation Techniques in Deep Learning-based Investment Predictions: An Indian ETF-based analysis
Abstract
The integration of deep learning into financial forecasting has significantly advanced predictive analytics, yet the effectiveness of these models is critically dependent on hyperparameter optimization (HPO). This study investigates the role of HPO in enhancing the predictive and financial performance of Long Short-Term Memory (LSTM) and one-dimensional Convolutional Neural Network (1D-CNN) models applied to the Nifty BeEs Exchange Traded Fund (ETF), a key proxy for the Indian equity market. Using daily log-return data from 2010 to 2025, four HPO techniques were systematically compared: grid search, Bayesian optimization, the Optuna GridSampler, and the Optuna Tree-structured Parzen Estimator (TPE) sampler. Evaluation metrics included Root Mean Squared Error (RMSE), directional accuracy (DA), Sharpe ratio, and computational cost. Results demonstrate that while traditional methods provide modest improvements, they fail to align statistical accuracy with financial viability. In contrast, the Optuna-based approaches, particularly the TPE sampler, significantly improved outcomes, raising LSTM directional accuracy to 63% and CNN directional accuracy to 61%, with Sharpe ratios exceeding 1.2 at minimal computational cost. These findings underscore that hyperparameter optimization is not a peripheral technical task but a strategic determinant of investment applicability, transforming deep learning models from theoretical constructs into practical forecasting engines. The study contributes to bridging methodological innovations in computer science with financial econometrics, offering actionable insights for ETF prediction in emerging markets.
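To make the grid-search baseline in the comparison concrete, the sketch below exhaustively scores every combination of a small hyperparameter grid. The hyperparameter names (`learning_rate`, `units`, `lookback`) and the quadratic stand-in objective are illustrative assumptions, not the study's actual search space; in the paper the objective would be the validation RMSE of a trained LSTM or 1D-CNN.

```python
import itertools

def validation_rmse(learning_rate, units, lookback):
    # Toy stand-in for "train the model, return validation RMSE";
    # its minimum sits at lr=0.001, units=64, lookback=20.
    return ((learning_rate - 0.001) ** 2 * 1e6
            + abs(units - 64) / 100
            + abs(lookback - 20) / 50)

# Hypothetical search space; the study's actual grid is not specified here.
grid = {
    "learning_rate": [0.01, 0.001, 0.0001],
    "units": [32, 64, 128],
    "lookback": [10, 20, 30],
}

best_params, best_score = None, float("inf")
# Grid search: evaluate every point in the Cartesian product of the grid.
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    score = validation_rmse(**params)
    if score < best_score:
        best_params, best_score = params, score

print(best_params)  # -> {'learning_rate': 0.001, 'units': 64, 'lookback': 20}
```

Because grid search scales multiplicatively with each added hyperparameter (here 3 x 3 x 3 = 27 trainings), adaptive samplers such as Optuna's TPE, which propose new trials based on the scores of earlier ones, reach comparable or better configurations with far fewer evaluations, which is the computational-cost advantage the abstract reports.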