
On Intra-Class Variance for Deep Learning of Classifiers

Open Access | Aug 2019

Abstract

A novel technique for deep learning of image classifiers is presented. The learned CNN models offer better separation of deep features (also known as embedded vectors), measured by Euclidean proximity, with no deterioration of classification results based on class membership probability. The latter property can be exploited to enhance image classifiers whose classes at the model's deployment stage differ from the classes used during training. The Shannon information of the SoftMax probability for the target class is extended, over each mini-batch, by the intra-class variance, while the trained network itself is extended by a Hadamard layer whose parameters represent the class centers. In contrast to existing solutions, this extra neural layer allows the training algorithm to interface with standard stochastic gradient optimizers, e.g. the AdaM algorithm. Moreover, this approach makes the computed centroids adapt immediately to the updated embedded vectors, reaching comparable accuracy in fewer epochs.
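The sketch below illustrates the general idea described in the abstract: a cross-entropy term (the Shannon information of the SoftMax probability for the target class) augmented by an intra-class variance penalty computed against learnable class centers, so that a standard optimizer such as Adam updates the centers together with the network weights. This is only a minimal approximation, not the paper's implementation: the paper couples the centers to the network through a Hadamard layer, whereas here the centers are stored as ordinary trainable parameters; all names (CenterLoss, training_step, lam, embed_dim) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Intra-class variance penalty: squared Euclidean distance between each
    embedded vector and the learnable centroid of its target class.
    The centers are plain nn.Parameters, so a standard stochastic gradient
    optimizer (e.g. Adam) adapts them alongside the network weights.
    (Hypothetical names; the paper realizes the centers via a Hadamard layer.)"""
    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Distance of each embedding to its target-class center, averaged over the mini-batch.
        diff = embeddings - self.centers[targets]
        return diff.pow(2).sum(dim=1).mean()

def training_step(model, center_loss, optimizer, images, targets, lam=0.01):
    # Assumes the model returns (deep features, class logits).
    embeddings, logits = model(images)
    # Cross-entropy plus the weighted intra-class variance term.
    loss = F.cross_entropy(logits, targets) + lam * center_loss(embeddings, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: pass both parameter groups to one optimizer so the centroids
# track the updated embedded vectors at every step.
# optimizer = torch.optim.Adam(list(model.parameters()) + list(center_loss.parameters()), lr=1e-3)
```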

DOI: https://doi.org/10.2478/fcds-2019-0015 | Journal eISSN: 2300-3405 | Journal ISSN: 0867-6356
Language: English
Page range: 285 - 301
Submitted on: Jan 29, 2019
Accepted on: Apr 23, 2019
Published on: Aug 28, 2019
Published by: Poznan University of Technology
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2019 Rafał Pilarczyk, Władysław Skarbek, published by Poznan University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.