A Novel Fast Feedforward Neural Networks Training Algorithm

Abstract

In this paper, a new neural network training algorithm is presented. The algorithm originates from the Recursive Least Squares (RLS) method commonly used in adaptive filtering. It uses the QR decomposition in conjunction with Givens rotations to solve the normal equations resulting from minimization of the loss function. Training time is an important consideration for neural networks: many commonly used algorithms require a large number of iterations to achieve a satisfactory outcome, while others are effective only for small networks. The proposed solution is characterized by a very short convergence time compared to the well-known backpropagation method and its variants. The paper contains a complete mathematical derivation of the proposed algorithm. Extensive simulation results are presented for various benchmarks, including function approximation, classification, encoder, and parity problems. The obtained results show the advantages of the featured algorithm, which outperforms commonly used state-of-the-art training algorithms, including the Adam optimizer and Nesterov's accelerated gradient.
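The abstract refers to solving the normal equations via a QR decomposition updated with Givens rotations, which is the same machinery used in QR-based RLS adaptive filtering. The sketch below is a minimal illustration of that general technique under stated assumptions, not the authors' algorithm: it maintains an upper-triangular factor R and a rotated right-hand side z so that the least-squares weights solve R w = z, and it absorbs each new sample with a sequence of Givens rotations. The names qr_rls_update and givens and the forgetting factor lam are illustrative, not taken from the paper.

```python
import numpy as np

def givens(a, b):
    """Return rotation coefficients (c, s) that zero b in the pair [a, b]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_rls_update(R, z, x, d, lam=1.0):
    """One recursive least-squares step via Givens rotations.

    R   : (n, n) upper-triangular factor of the (weighted) data matrix
    z   : (n,)   rotated right-hand side, so the weights solve R w = z
    x   : (n,)   new input sample
    d   : scalar target for this sample
    lam : forgetting factor in (0, 1]
    """
    sq = np.sqrt(lam)
    R = sq * R.copy()
    z = sq * z.copy()
    x = x.astype(float).copy()
    d = float(d)
    # Annihilate each entry of the appended row [x | d] against the diagonal of R.
    for i in range(len(x)):
        c, s = givens(R[i, i], x[i])
        Ri, xi = R[i, i:].copy(), x[i:].copy()
        R[i, i:] = c * Ri + s * xi      # rotate row i of R
        x[i:] = -s * Ri + c * xi        # x[i] becomes exactly zero
        z[i], d = c * z[i] + s * d, -s * z[i] + c * d  # same rotation on the RHS
    return R, z

# Toy usage: stream samples of a fixed linear model, then back-substitute.
rng = np.random.default_rng(0)
n = 3
R, z = np.eye(n) * 1e-3, np.zeros(n)    # small ridge-like initialization
w_true = np.array([0.5, -1.0, 2.0])
for _ in range(200):
    x = rng.standard_normal(n)
    R, z = qr_rls_update(R, z, x, x @ w_true)
w = np.linalg.solve(R, z)               # triangular solve for the weights
print(w)                                # approximately w_true
```

In an RLS-style network trainer, updates of this kind would presumably be applied per layer or per neuron to a linearized output error; the toy usage above only fits a fixed linear model, to show how the triangular factorization converges without ever forming or inverting the normal-equation matrix explicitly.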

Language: English
Page range: 287 - 306
Submitted on: Feb 15, 2021
Accepted on: Jul 24, 2021
Published on: Oct 8, 2021
Published by: SAN University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2021 Jarosław Bilski, Bartosz Kowalczyk, Andrzej Marjański, Michał Gandor, Jacek Zurada, published by SAN University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.