
Finding the Sweet Spot: A Study of Data Augmentation Intensity for Small-Scale Image Classification

By: Windra Swastika  
Open Access | Dec 2025

Figures & Tables

Figure 1.

CNN Architecture for CIFAR-10

Figure 2.

Performance vs. Augmentation Intensity

Figure 3.

Comprehensive performance comparison across all augmentation methods

Figure 4.

Learning curve comparison

Figure 5.

Overfitting Gap Analysis

Intensity Framework Validation Through Component Analysis

Baseline (0.0): No Augmentation
- Performance: 77.49%
- Transformation diversity: 0 transformations; preprocessing only (resize to 224×224, ImageNet normalization); no regularization benefit
- Parameter impact: maintains original data fidelity but lacks regularization capacity, leading to overfitting on the training data

Light Advanced (0.09)
- Performance: 78.80%
- Transformation diversity: 2 transformations; conservative diversity: HorizontalFlip (p=0.5), RandomBrightnessContrast (p=0.3); minimal but effective regularization
- Parameter impact: moderate parameters balance regularization and stability

Optimal (0.49): Basic
- Performance: 79.84%
- Transformation diversity: 3 transformations; optimal diversity balance: RandomHorizontalFlip (p=0.5), RandomRotation (±10°), ColorJitter (brightness, contrast, saturation ±0.2); best regularization-performance trade-off
- Parameter impact: moderate parameters; rotation ±10° and color jitter in the ±0.2 range achieve the best balance between regularization effectiveness and learning stability

Moderate (0.51): Moderate Advanced
- Performance: 75.59%
- Transformation diversity: 4 transformations; increased complexity: HorizontalFlip (p=0.5), ShiftScaleRotate (p=0.4), RandomBrightnessContrast (p=0.4), HueSaturationValue (p=0.3); complexity begins to create interference
- Parameter impact: aggressive parameters; shift/scale ±0.1, rotation ±15°, and HSV modifications widen the parameter ranges and start introducing instability

Heavy (0.94–0.98): Strong Advanced, AutoAugment Style
- Performance: 71.64%–74.01%
- Transformation diversity: 5-6 transformations; excessive complexity: multiple geometric transforms, destructive elements (CoarseDropout, GaussNoise), and complex photometric transforms (GridDistortion, RandomGamma) overwhelm learning capacity
- Parameter impact: aggressive parameters; rotation ±25°, noise injection, and wide parameter ranges (±0.2 and above) distort the data distribution beyond the model's learning capacity
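The intensity levels above can be sketched as plain data. This is a minimal, hypothetical encoding, not the authors' code: transform names follow the torchvision/Albumentations naming used in the table, the Heavy (0.94–0.98) level is omitted because its exact composition is given only as a range, and only the counts and parameters stated in the table are reproduced.

```python
# Hypothetical encoding of the augmentation pipelines described in the
# intensity-framework table: each level maps to a list of
# (transform name, parameters) pairs. Illustrative only.
PIPELINES = {
    # preprocessing only: resize to 224x224 + ImageNet normalization
    "No Augmentation (0.0)": [],
    "Light Advanced (0.09)": [
        ("HorizontalFlip", {"p": 0.5}),
        ("RandomBrightnessContrast", {"p": 0.3}),
    ],
    "Basic (0.49)": [
        ("RandomHorizontalFlip", {"p": 0.5}),
        ("RandomRotation", {"degrees": 10}),
        ("ColorJitter", {"brightness": 0.2, "contrast": 0.2, "saturation": 0.2}),
    ],
    "Moderate Advanced (0.51)": [
        ("HorizontalFlip", {"p": 0.5}),
        ("ShiftScaleRotate", {"p": 0.4}),
        ("RandomBrightnessContrast", {"p": 0.4}),
        ("HueSaturationValue", {"p": 0.3}),
    ],
}

# The transformation count per level matches the table's
# "transformation diversity" column (0, 2, 3, 4).
for name, transforms in PIPELINES.items():
    print(f"{name}: {len(transforms)} transformations")
```

Note how intensity tracks both the number of transforms and the aggressiveness of their parameters, which is why the 0.49 and 0.51 levels sit close in score yet diverge sharply in accuracy.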

Comprehensive performance analysis across augmentation strategies

Method (Intensity Score) | Val Acc (%) | F1-Score | Training Time (s) | Overfitting Gap (%)
No Augmentation (0.0)    | 77.49       | 0.774    | 650.4             | 3.54
Basic (0.49)             | 79.84       | 0.797    | 1255.6            | -1.56
Light Advanced (0.09)    | 78.80       | 0.786    | 342.5             | -0.28
Moderate Advanced (0.51) | 75.59       | 0.754    | 341.7             | -4.77
Strong Advanced (0.94)   | 71.64       | 0.714    | 343.5             | -13.06
AutoAugment Style (0.98) | 74.01       | 0.737    | 342.2             | -6.83
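A short sketch of how to read the overfitting-gap column, assuming (the table does not state this explicitly) the conventional definition gap = training accuracy − validation accuracy, so a negative gap means the model generalizes better than it fits the training set:

```python
# Recover the training accuracy implied by each method's validation
# accuracy and overfitting gap, under the assumption
# gap = train_acc - val_acc (illustrative, not stated in the paper).
results = {
    # method: (val_acc_percent, overfitting_gap_percent)
    "No Augmentation (0.0)": (77.49, 3.54),
    "Basic (0.49)": (79.84, -1.56),
    "Light Advanced (0.09)": (78.80, -0.28),
    "Moderate Advanced (0.51)": (75.59, -4.77),
    "Strong Advanced (0.94)": (71.64, -13.06),
    "AutoAugment Style (0.98)": (74.01, -6.83),
}

def implied_train_acc(val_acc: float, gap: float) -> float:
    """Training accuracy implied by validation accuracy and gap."""
    return round(val_acc + gap, 2)

for method, (val_acc, gap) in results.items():
    print(f"{method}: implied train acc {implied_train_acc(val_acc, gap):.2f}%")
```

Under this reading, only the unaugmented baseline trains above its validation accuracy; heavy augmentation drives training accuracy far below validation accuracy, which signals under-fitting rather than better generalization.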
DOI: https://doi.org/10.14313/jamris-2025-038 | Journal eISSN: 2080-2145 | Journal ISSN: 1897-8649
Language: English
Page range: 94 - 101
Submitted on: Jun 28, 2025 | Accepted on: Aug 22, 2025 | Published on: Dec 24, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Windra Swastika, published by Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.