Efficient Vehicle Detection and Classification Algorithm Using Faster R-CNN Models

Open Access | Dec 2024

Abstract

This study proposes an integrated framework for efficient traffic object detection and classification that leverages advanced deep-learning techniques. The framework takes surveillance video as input and applies an image-acquisition step to extract the relevant frames. A Faster R-CNN (ResNet-152) architecture then performs precise object detection within the extracted frames. The detected objects are classified using deep reinforcement learning, trained specifically to identify distinct traffic entities such as buses, cars, trams, trolleybuses, and vans. The UA-DETRAC dataset serves as the primary data source for training and evaluation, ensuring the model’s adaptability to real-world traffic scenarios. Finally, the framework’s performance is assessed using key metrics, including precision, recall, and F1 score, providing insight into its effectiveness at accurately detecting and classifying traffic objects. This integrated approach offers a promising way to enhance traffic surveillance systems and to improve traffic management and safety in urban environments.
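The paper itself provides no code, so the following is a minimal, hypothetical sketch of the detection stage described in the abstract, written with PyTorch/torchvision: frames are extracted from a surveillance video with OpenCV and passed through a Faster R-CNN detector. The ResNet-152 backbone and the five vehicle classes come from the abstract; the `detect_frames` helper, the score threshold, and the use of torchvision's `FasterRCNN` and `resnet_fpn_backbone` APIs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the detection stage (assumes torchvision >= 0.13).
import cv2
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Classes named in the abstract: background + five traffic entities.
CLASSES = ["__background__", "bus", "car", "tram", "trolleybus", "van"]

# Faster R-CNN with a ResNet-152 FPN backbone, as named in the abstract.
# In practice the weights would come from training on UA-DETRAC; None is
# a placeholder for an untrained backbone.
backbone = resnet_fpn_backbone(backbone_name="resnet152", weights=None)
model = FasterRCNN(backbone, num_classes=len(CLASSES))
model.eval()

def detect_frames(video_path: str, score_threshold: float = 0.5):
    """Extract frames with OpenCV and run Faster R-CNN detection on each."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # BGR uint8 (H, W, C) -> RGB float tensor (C, H, W) in [0, 1].
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            output = model([tensor])[0]
        keep = output["scores"] >= score_threshold
        yield output["boxes"][keep], [CLASSES[i] for i in output["labels"][keep]]
    capture.release()
```

Downstream of this detector, the abstract's deep-reinforcement-learning classifier would assign each detection to one of the five vehicle types, and the predictions would then be scored against UA-DETRAC ground truth using precision, recall, and F1 = 2 · precision · recall / (precision + recall).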

DOI: https://doi.org/10.14313/jamris/4-2024/33 | Journal eISSN: 2080-2145 | Journal ISSN: 1897-8649
Language: English
Page range: 86 - 93
Submitted on: Mar 16, 2022
Accepted on: Apr 3, 2024
Published on: Dec 10, 2024
Published by: Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Imad EL Mallahi, Jamal Riffi, Hamid Tairi, Mohamed Adnane Mahraz, published by Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.