
Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets

Open Access | Dec 2017

Abstract

The paper considers the DropOut technique for preventing overfitting of convolutional neural networks in image classification. The goal is to find a rule for rationally allocating DropOut layers of 0.5 rate so as to maximise performance. To achieve the goal, two common network architectures are used, having either 4 or 5 convolutional layers. Benchmarking is performed on the CIFAR-10, EEACL26, and NORB datasets. Initially, a series of all admissible versions of DropOut layer allocation is generated. After the performance over the series is evaluated, normalised, and averaged, a compromise rule is found. It consists in non-compactly inserting a few DropOut layers before the last convolutional layer. A scheme with two or more DropOut layers is likely to fit networks of many convolutional layers for image classification problems with a large number of features. Such a scheme should also fit simple datasets prone to overfitting. In fact, the rule "prefers" a smaller number of DropOut layers. As an example, the gain from applying the rule is roughly between 10 % and 50 %.
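
For illustration, below is a minimal Keras-style sketch of the kind of allocation the rule describes: two DropOut layers of 0.5 rate inserted non-compactly (separated by convolutional layers), with none placed after the last convolutional layer. The layer widths, input shape, and exact placement are assumptions for illustration only, not the benchmarked architectures from the paper.

```python
# A minimal sketch of non-compact DropOut(0.5) allocation in a CNN,
# assuming a Keras implementation; layer sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),               # CIFAR-10-sized input (assumed)
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Dropout(0.5),                           # first DropOut layer
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.Dropout(0.5),                           # second DropOut, not adjacent to the first
    layers.Conv2D(128, 3, activation="relu", padding="same"),  # last conv layer: no DropOut after it
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),        # 10 classes for CIFAR-10
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The point of the sketch is the placement pattern: the DropOut layers are spread out rather than stacked back to back, and both precede the final convolutional layer.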

DOI: https://doi.org/10.1515/acss-2017-0018 | Journal eISSN: 2255-8691 | Journal ISSN: 2255-8683
Language: English
Page range: 54 - 63
Published on: Dec 27, 2017
Published by: Riga Technical University
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2017 Vadim V. Romanuke, published by Riga Technical University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.