
Comparison of Computer Vision and Convolutional Neural Networks for Vehicle Parking Control

Open Access | Jun 2025

Figures & Tables

Figure 1.

Research stages

Figure 2.

UTMACH parking lot

Figure 3.

Classified images

Figure 4.

Dataset creation

Figure 5.

ROI selection

Figure 6.

Real-time space control with the convolutional neural network

Figure 7.

Real-time space control with computer vision
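
Figures 5–7 illustrate ROI selection and real-time space control with both approaches. The following is a minimal sketch of how a trained YOLOv5 model could be applied to camera frames and matched against predefined ROIs; the weights file name, camera index, ROI coordinates, and overlap test are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of real-time space control with a trained YOLOv5 model.
# Weights file, camera index, ROIs, and overlap test are assumptions.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Hypothetical parking-space ROIs as (x1, y1, x2, y2) rectangles in pixels.
ROIS = [(50, 200, 180, 320), (200, 200, 330, 320)]

def overlaps(box, roi):
    """True if a detection box intersects a parking-space ROI."""
    bx1, by1, bx2, by2 = box
    rx1, ry1, rx2, ry2 = roi
    return not (bx2 < rx1 or bx1 > rx2 or by2 < ry1 or by1 > ry2)

cap = cv2.VideoCapture(0)  # camera stream (index assumed)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])                 # BGR -> RGB for the model
    detections = results.xyxy[0].cpu().numpy()        # rows: x1, y1, x2, y2, conf, class
    occupied = [any(overlaps(det[:4], roi) for det in detections) for roi in ROIS]
    print("free spaces:", occupied.count(False))
cap.release()
```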

Training parameters

Parameter | Value
Dataset | Own
Training images | 800
Validation images | 200
Learning rate | 0.001
Pre-trained weights | yolov5x.pt
Number of epochs | 500
Batch size | 8
Image dimensions (height × width) | 640 × 640
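
A minimal sketch of how these parameters could be passed to the standard YOLOv5 training script follows; the dataset configuration and hyperparameter file names are placeholders, since the paper does not state the exact command used.

```python
# Minimal sketch, assuming training was launched through the ultralytics/yolov5
# repository's train.py; file names below are placeholders, not from the paper.
import subprocess

subprocess.run([
    "python", "train.py",
    "--img", "640",              # image dimensions 640 x 640
    "--batch", "8",              # batch size
    "--epochs", "500",           # number of epochs
    "--weights", "yolov5x.pt",   # pre-trained weights
    "--data", "parking.yaml",    # hypothetical dataset config (800 train / 200 val images)
    "--hyp", "hyp.custom.yaml",  # hyperparameter file setting lr0: 0.001 (assumed)
], check=True)
```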

Materials used in research

Tool | Description
DAHUA IPC-HFW1430DT-STW | 4 MP, 2.8 mm fixed lens, 1/3” progressive CMOS sensor, H.265+ compression, 30 m IR LED, DWDR, Day/Night mode (ICR), 3DNR, AWB, AGC, BLC, Mirror, IP67 outdoor protection, WiFi, MicroSD slot (256 GB)
Google Colab | Cloud-based execution and training environment with GPU support
LabelImg | Open-source tool for manual image labeling
Roboflow | Software for organizing, labeling, and transforming images
YAML | Text file format for model parameter configuration
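
Since YAML is listed as the format for model parameter configuration, the following is a hypothetical sketch of the kind of dataset YAML that YOLOv5 expects, written from Python; the paths, class count, and class names are assumptions, not values reported in the paper.

```python
# Hypothetical sketch of a YOLOv5 dataset YAML file, written from Python.
# Paths, class count, and class names are assumptions, not from the paper.
yaml_text = """\
train: images/train   # 800 training images
val: images/val       # 200 validation images
nc: 1                 # number of classes (assumed, e.g. 'vehicle')
names: ['vehicle']
"""
with open("parking.yaml", "w", encoding="utf-8") as f:
    f.write(yaml_text)
```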

Computer vision metrics results

Test | Precision | Sensitivity
Test 1 | 1 | 1
Test 2 | 0 | 0
Test 3 | 1 | 1
Test 4 | 1 | 0.94
Test 5 | 1 | 1
Total | 0.80 | 0.79
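
For reference, precision and sensitivity (recall) are computed from true positives, false positives, and false negatives. A minimal sketch follows; the example counts are illustrative only, not the paper's data.

```python
# Minimal sketch of the precision and sensitivity (recall) metrics used above.
# The example counts are illustrative only, not taken from the paper.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if (tp + fp) else 0.0

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example: 80 correct detections, 20 false alarms, 21 missed spaces (illustrative).
print(round(precision(80, 20), 2))    # 0.80
print(round(sensitivity(80, 21), 2))  # 0.79
```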

Comparison of precision and sensitivity of different parking space detection methods

Research Study | Technique | Precision | Sensitivity
This Study | YOLOv5 (CNN) | 88.00% | 82.00%
[32] | DeepLabV3+ | 77.26% | 79.55% (Dice)
[33] | YOLO (CNN, pixel-wise ROI) | 99.68% (balanced accuracy) | 99.68% (balanced accuracy)
[34] | ResNet50 + SVM, VGG16 | 98.90% / 93.40% | Not specified
[35] | Semantic Segmentation (CNN) | 96.81% | 97.80%
[36] | mAlexNet (CNN) | 90.34% | 98.98%
[37] | YOLOv4 (CNN) | 93.00% | 98.00%
[38] | U-Net (CNN) | 99.40% | 92.94%
[39] | YOLOv7 + IoU (CNN) | 90.04% | 82.17%
This Study | Image Segmentation (CV) | 80.00% | 79.00%
[40] | Optical Flow (CV) | 98.80% | 94.40%
[41] | HOG, LBP, SVM and Naive Bayes (CV) | 97.00% | 97.00%
[42] | Binary Morphology and Logic (CV) | 76.75% | 99.00%
[43] | Optical Flow (CV) | 97.90% | 62.40%
[44] | Block Matching Algorithm (CV) | 93.00% | 46.00%
[45] | Multi-clue recovery model (CV) | 93.21% | 96.84%

CNN metrics results

Metric | Value
Precision | 0.8755
Sensitivity | 0.8158
DOI: https://doi.org/10.14313/jamris-2025-011 | Journal eISSN: 2080-2145 | Journal ISSN: 1897-8649
Language: English
Page range: 26 - 33
Submitted on: Nov 14, 2024
Accepted on: Apr 4, 2025
Published on: Jun 26, 2025
Published by: Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
In partnership with: Paradigm Publishing Services
Publication frequency: 4 times per year

© 2025 Jonathan Aguilar Alvarado, Karina Garcia Galarza, Wilmer Rivas Asanza, Bertha Mazón Olivo, published by Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.