
A Novel Variance Reduction Proximal Stochastic Newton Algorithm for Large-Scale Machine Learning Optimization

Open Access | Dec 2024

Abstract

This paper introduces the Variance Reduction Proximal Stochastic Newton Algorithm (SNVR) for solving composite optimization problems in machine learning, specifically minimizing F(w) + Ω(w), where F is a smooth convex function and Ω is a non-smooth convex regularizer. SNVR combines variance reduction techniques with the proximal Newton method to achieve faster convergence while handling non-smooth regularizers. Theoretical analysis establishes that SNVR achieves linear convergence under standard assumptions, outperforming existing methods in terms of iteration complexity. Experimental results on the "heart" dataset (N = 600, d = 13) demonstrate SNVR's superior performance:

Convergence speed: SNVR reaches the optimal solution in 5 iterations, compared to 14 for ProxSVRG and more than 20 for ProxSGD and ProxGD.
Solution quality: SNVR achieves an optimal objective function value of 0.1919, matching ProxSVRG and outperforming ProxSGD (0.1940) and ProxGD (0.2148).
Efficiency: SNVR achieves a 10.5% reduction in the objective function value within the first two iterations.

These results indicate that SNVR offers significant improvements in both convergence speed (180-300% faster) and solution quality (up to 11.9% better) compared to existing methods, making it a valuable tool for large-scale machine learning optimization tasks.
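The abstract does not reproduce the algorithm itself, but the general idea it describes can be sketched compactly: an SVRG-style variance-reduced stochastic gradient, a Newton-type curvature scaling, and a proximal step to handle the non-smooth term Ω. The sketch below is illustrative rather than the authors' SNVR: it assumes F is an average logistic loss with labels in {-1, +1}, Ω is an ℓ1 penalty, and the Hessian is approximated by its diagonal so the scaled proximal step has a closed form; the function names, step size eta, and batch sizes are assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss over rows of X (labels in {-1, +1})."""
    margins = y * (X @ w)
    sigma = 1.0 / (1.0 + np.exp(margins))
    return -(X.T @ (y * sigma)) / X.shape[0]

def diag_hessian(w, X):
    """Diagonal of the average logistic-loss Hessian, used as a cheap Newton-type metric."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    s = p * (1.0 - p)                      # per-sample curvature weights
    return (X ** 2).T @ s / X.shape[0]

def snvr_sketch(X, y, lam=0.01, eta=1.0, epochs=5, inner=50, batch=10, seed=0):
    """Variance-reduced proximal (quasi-)Newton sketch for min_w F(w) + lam * ||w||_1."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        full_grad = logistic_grad(w_snap, X, y)          # full gradient at the snapshot (SVRG-style)
        for _ in range(inner):
            idx = rng.choice(N, size=batch, replace=False)
            # Variance-reduced stochastic gradient
            v = (logistic_grad(w, X[idx], y[idx])
                 - logistic_grad(w_snap, X[idx], y[idx])
                 + full_grad)
            # Diagonal Hessian approximation on the minibatch (regularized for stability)
            h = diag_hessian(w, X[idx]) + 1e-3
            # Scaled proximal step; closed form because the metric is diagonal
            w = soft_threshold(w - eta * v / h, eta * lam / h)
    return w
```

As a usage example, calling snvr_sketch(X, y, lam=0.01) on a feature matrix X and label vector y in {-1, +1} returns an approximate minimizer of the ℓ1-regularized logistic loss; the diagonal metric keeps the proximal subproblem in closed form, which is one common way to make Newton-type scaling compatible with a non-smooth regularizer.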

Language: English
Page range: 84 - 90
Published on: Dec 31, 2024
Published by: Xi’an Technological University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Mohammed Moyed Ahmed, published by Xi’an Technological University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.