Estimating the Initial Weights in Artificial Neural Networks for Univariate Time Series with Kernel Density Estimator
Abstract
A significant disadvantage of machine learning and artificial intelligence modeling methods, in contrast to parametric approaches, is that they do not always produce definitive, reproducible results, especially in time series forecasting tasks. Although they can deliver highly accurate predictions, these modeling approaches may yield different results even when run with the same hyper-parameter configuration. In artificial neural networks, one of the most widely used and important of these methods, the random determination of the initial weights means that training starts from a different point in a very large solution space each time, which lengthens the optimization process and produces different outcomes every time the network is trained. A central goal in time series modelling with neural networks is therefore to shorten the optimization process and improve prediction performance. To achieve this, a new, data-driven weight initialization approach based on kernel density estimators has been developed for determining the initial parameters of artificial neural networks. To demonstrate the performance of this approach quickly and effectively, numerous univariate time series datasets were utilized: five simulated and six real-life time series datasets commonly used in the literature were employed to test the validity of the proposed method. Performance comparisons were made against the five most widely used and fundamental weight initialization techniques in terms of test error, number of iterations, and pre-training dataset error. The results show that the proposed method consistently outperforms the other evaluated weight initialization techniques in both optimization efficiency and error performance.
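The abstract does not spell out the initialization mechanism itself, so the following is only a minimal illustrative sketch of the general idea: fitting a kernel density estimator to the scaled series values and drawing the initial weights of a small feed-forward network from that density rather than from a uniform or normal distribution. The series, network sizes, and the `kde_init` helper are hypothetical choices made for the example, not the authors' exact procedure.

```python
# Minimal sketch (assumed, not the authors' exact method): data-driven weight
# initialization by sampling from a KDE fitted to the scaled series values.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical univariate series, min-max scaled as is common before training.
series = np.sin(np.linspace(0.0, 20.0, 200)) + rng.normal(0.0, 0.1, 200)
scaled = (series - series.min()) / (series.max() - series.min())

# Fit a Gaussian kernel density estimator to the scaled observations.
kde = gaussian_kde(scaled)

def kde_init(shape, kde):
    """Sample an initial weight array of the given shape from the fitted density."""
    n = int(np.prod(shape))
    return kde.resample(n).ravel().reshape(shape)

# Example: initial weights for a network with 4 lagged inputs and 8 hidden units.
W_input_hidden = kde_init((4, 8), kde)    # input-to-hidden weights
W_hidden_output = kde_init((8, 1), kde)   # hidden-to-output weights
print(W_input_hidden.shape, W_hidden_output.shape)
```

Under this reading, the initial weights reflect the empirical distribution of the data, so every training run starts from a comparable, data-informed region of the solution space instead of an arbitrary random point.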