Enhancing Convergence of Optimization Algorithms for Logistic Regression: A Breast Cancer Diagnosis Application


Abstract

This paper explores strategies for enhancing the convergence of the Newton and Trust Region methods used to solve the optimization problems underlying the logistic regression technique: maximizing the likelihood function and minimizing the sum of squared errors. Two improvements are proposed: a local search heuristic that supplies a good initial solution, and an adaptive choice of the learning rate (step size) at each iteration. The heuristic-generated initial points significantly improve the convergence rate of these gradient-based methods. In addition, instead of fixing the learning rate or step size in advance, the optimization process computes it dynamically at each iteration with the Golden Section Search method, yielding a more adaptive and efficient search strategy. Extensive numerical experiments and a detailed case study validate the effectiveness of this approach.
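To make the abstract's idea concrete, the following is a minimal sketch (not the authors' implementation) of one of the two ingredients it describes: Newton iterations that maximize the logistic-regression log-likelihood, with the step size along the Newton direction chosen at each iteration by a golden-section line search rather than fixed. All names, the [0, 2] search interval, and the synthetic data are illustrative assumptions; the paper's heuristic for the initial point is stood in for by a simple starting vector w0.

```python
import numpy as np

def sigmoid(z):
    # Clipping guards against overflow in exp for extreme arguments.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def neg_log_likelihood(w, X, y):
    # Negative log-likelihood of logistic regression (minimized below).
    p = sigmoid(X @ w)
    eps = 1e-12  # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def golden_section(f, a, b, tol=1e-5):
    # Standard golden-section search for the minimizer of a unimodal f on [a, b].
    phi = (np.sqrt(5) - 1) / 2  # ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def newton_logistic(X, y, w0, max_iter=50, tol=1e-8):
    # Newton's method with an adaptive step size along the Newton direction.
    w = w0.copy()
    for _ in range(max_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y)                  # gradient of the neg. log-likelihood
        H = X.T @ (X * (p * (1 - p))[:, None])  # Hessian: X^T diag(p(1-p)) X
        direction = np.linalg.solve(H, -grad)
        # Step size via golden-section search instead of a fixed learning rate.
        alpha = golden_section(
            lambda t: neg_log_likelihood(w + t * direction, X, y), 0.0, 2.0)
        w = w + alpha * direction
        if np.linalg.norm(alpha * direction) < tol:
            break
    return w

# Toy usage on synthetic data; w0 stands in for a heuristic-generated start.
rng = np.random.default_rng(0)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])
true_w = np.array([0.5, 2.0, -1.0])
y = (rng.random(200) < sigmoid(X @ true_w)).astype(float)
print(newton_logistic(X, y, w0=np.zeros(3)))
```

With a well-chosen starting point, the line search typically returns a step near 1 (the classical Newton step); far from the optimum, it shrinks the step automatically, which is the adaptivity the abstract refers to.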
