The paper proposes a non-iterative training algorithm for a power-efficient SNN classifier intended for self-learning systems. The approach uses mechanisms for preprocessing signals from sensory neurons typical of the thalamus in the diencephalon. The algorithm is based on the cusp catastrophe model and on training by routing. It guarantees zero dispersion of connection weight values across the entire network, which is particularly important for hardware implementations based on programmable logic devices. Thanks to non-iterative mechanisms inspired by training methods for associative memories, the approach makes it possible to estimate both the capacity of the network and the required hardware resources. The trained network is resistant to catastrophic forgetting. The low complexity of the algorithm enables in-situ hardware training without power-hungry accelerators. The paper compares the hardware implementation complexity of the algorithm with the classic STDP and conversion procedures. The primary application of the algorithm is an autonomous agent equipped with a vision system and based on a classic FPGA device.
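For context, the cusp catastrophe referred to above is the standard model from catastrophe theory; the sketch below gives its usual textbook form (the paper's specific parameterization may differ). The potential with state variable $x$ and control parameters $a$, $b$ is

```latex
% Standard cusp catastrophe potential (textbook form; the paper's
% parameterization is not specified in the abstract):
V(x; a, b) = \frac{1}{4}x^{4} + \frac{1}{2}a x^{2} + b x

% Equilibria satisfy dV/dx = 0:
x^{3} + a x + b = 0

% Bifurcation set (boundary of the bistable region), where the
% equilibrium equation has a repeated root:
4a^{3} + 27b^{2} = 0
```

Inside the region $4a^{3} + 27b^{2} < 0$ the system is bistable, which is the property typically exploited when such a model drives a binary decision or routing mechanism.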
© 2024 Damian Huderek, Szymon Szczęsny, Paweł Pietrzak, Raul Rato, Łukasz Przyborowski, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.