
Feature Map Augmentation to Improve Scale Invariance in Convolutional Neural Networks

Open Access | Nov 2022

Abstract

Introducing variation into the training dataset through data augmentation has been a popular technique for making Convolutional Neural Networks (CNNs) spatially invariant, but it increases dataset volume and computation cost. Instead of data augmentation, augmentation of feature maps is proposed to introduce variation into the features extracted by a CNN. To achieve this, a rotation transformer layer called the Rotation Invariance Transformer (RiT) is developed, which applies rotation transformations to augment CNN features. The RiT layer can be used to augment the output features of any convolution layer within a CNN; however, it is most effective when placed at the output of the final convolution layer. We test RiT on the task of scale invariance, where we attempt to classify scaled images from benchmark datasets. Our results show promising improvements in the network's ability to be scale invariant while keeping the model's computation cost low.
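To make the idea concrete, the sketch below shows one plausible way a feature-map rotation layer could work: it takes a batch of CNN feature maps and appends rotated copies, so that downstream layers see rotated variants of the extracted features rather than rotated input images. This is a minimal illustration assuming 90-degree rotations and NumPy arrays; the function name `rit_augment`, the angle set, and the concatenation along the batch axis are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rit_augment(feature_maps, angles=(90, 180, 270)):
    """Hypothetical sketch of a feature-map rotation layer in the spirit of RiT.

    feature_maps: array of shape (batch, channels, height, width),
                  e.g. the output of the final convolution layer.
    Returns the original maps concatenated with one rotated copy per angle,
    stacked along the batch axis.
    """
    augmented = [feature_maps]
    for angle in angles:
        k = angle // 90  # number of counter-clockwise 90-degree turns
        # Rotate over the two spatial axes, leaving batch and channels intact.
        augmented.append(np.rot90(feature_maps, k=k, axes=(2, 3)))
    return np.concatenate(augmented, axis=0)

# Usage: a batch of 2 feature maps with 4 channels of spatial size 8x8
maps = np.random.rand(2, 4, 8, 8)
out = rit_augment(maps)
print(out.shape)  # → (8, 4, 8, 8): original batch plus three rotated copies
```

Restricting the sketch to 90-degree rotations keeps the spatial grid intact without interpolation; arbitrary angles would require resampling the feature maps.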

Language: English
Page range: 51 - 74
Submitted on: Feb 21, 2022
Accepted on: Oct 19, 2022
Published on: Nov 28, 2022
Published by: SAN University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2022 Dinesh Kumar, Dharmendra Sharma, published by SAN University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.