
Figures & Tables

Figure 1.

The robot chassis consists of the robotic vehicle base station and a high-top frame for mounting the depth camera and the disinfection electrostatic sprayer. A depth camera is mounted on top of the frame, and a disinfection subsystem is mounted inside the frame, culminating in the spray nozzle

Figure 2.

The diagram illustrates the system architecture of the autonomous disinfection robot. The system comprises the machine vision subsystem for human detection, the navigation subsystem for robot navigation and obstacle avoidance, the disinfection subsystem for disinfection tasks, the decision support system for data analysis and optimization, and the mobile application subsystem for user interaction and operation. The arrows indicate the data flow (blue) and control flow (yellow) between the components

Figure 3.

The disinfection robot system displays the environment, the AGV’s position, and options for disinfection and navigation

Figure 4.

YOLOv5 architecture overview: (a) Backbone with CSP bottleneck (BCSP) and SPP modules for feature extraction, (b) Neck with PANet structure for feature fusion, and (c) Head with Conv1x1 layers for final object detection
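
For readers who want to map the caption's (a)–(c) decomposition onto concrete layers, a minimal sketch below prints the module list of the stock YOLOv5 Nano from torch.hub; note that recent ultralytics/yolov5 releases use an SPPF block where the figure names SPP:

```python
import torch

# Print the layer list of the stock YOLOv5 Nano model to see the
# backbone (Conv/C3/SPPF), neck (Upsample/Concat/C3), and Detect head
# that Figure 4 sketches as (a), (b), and (c).
model = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)
print(model)
```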

Figure 5.

Evaluation of the fine-tuned YOLOv5 Nano human detector across various confidence thresholds: (a) Recall–Confidence curve, showing a maximum recall of 0.91 at thresholds near 0.00. (b) Precision–Confidence curve, with peak precision of 1.00 at a 0.94 threshold. (c) Precision–Recall curve, yielding a mAP@0.5 of 0.795. (d) F1–Confidence curve, with the highest F1 value of 0.77 at a 0.31 threshold
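
All four curves come from the same sweep over confidence thresholds. Below is a minimal sketch of that computation, assuming hypothetical inputs `scores` (per-detection confidences), `is_tp` (whether each detection matched a ground-truth person), and a ground-truth count `n_gt`; it is not the paper's evaluation code:

```python
import numpy as np

def f1_confidence_curve(scores, is_tp, n_gt, thresholds=np.linspace(0, 1, 101)):
    """Sweep confidence thresholds and compute precision, recall, and F1.

    scores : per-detection confidence values (1-D array)
    is_tp  : boolean array, True where a detection matched a ground-truth box
    n_gt   : total number of ground-truth humans in the evaluation set
    """
    curves = []
    for t in thresholds:
        keep = scores >= t
        tp = np.count_nonzero(is_tp & keep)
        fp = np.count_nonzero(~is_tp & keep)
        precision = tp / (tp + fp) if tp + fp else 1.0  # convention: P=1 with no detections
        recall = tp / n_gt if n_gt else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        curves.append((t, precision, recall, f1))
    return curves

# The threshold maximizing F1 (0.31 in Figure 5d) balances precision and
# recall for deployment:
# best = max(f1_confidence_curve(scores, is_tp, n_gt), key=lambda c: c[3])
```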

Figure 6.

This figure illustrates the detection of individuals in an indoor environment using YOLOv5 and the RealSense stereo depth camera. The bounding boxes highlight each detected person, and the distance from the camera is also annotated. The visual output demonstrates the system’s capacity to estimate individuals’ distances in real time
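
A minimal sketch of such a detection-plus-depth loop, using the off-the-shelf torch.hub YOLOv5 Nano model and pyrealsense2; the stream resolution and center-pixel depth sampling are our assumptions, not necessarily the paper's exact pipeline:

```python
import numpy as np
import pyrealsense2 as rs
import torch

# Load a YOLOv5 Nano model from torch.hub and restrict it to the
# COCO "person" class (class id 0).
model = torch.hub.load("ultralytics/yolov5", "yolov5n")
model.classes = [0]

# Start color and depth streams and align depth to the color frame.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    color = np.asanyarray(frames.get_color_frame().get_data())
    depth = frames.get_depth_frame()

    # Detect people and read the depth (in meters) at each box center.
    results = model(color[:, :, ::-1])  # BGR -> RGB for the detector
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        print(f"person conf={conf:.2f} at {depth.get_distance(cx, cy):.2f} m")
finally:
    pipeline.stop()
```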

Figure 7.

(a) Laboratory at Aristotle University of Thessaloniki (70 m²), showing accurate reconstruction of walls, doorways, and fixed obstacles within a 0.05 m resolution grid. (b) Classroom at International Hellenic University (60 m²), demonstrating consistent map quality across different indoor environments
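
At 0.05 m resolution each grid cell covers 0.0025 m², so the 70 m² laboratory corresponds to roughly 28,000 free cells. A minimal sketch of recovering such areas from an exported occupancy-grid image; the filename "lab_map.pgm" and the free/occupied pixel thresholds are hypothetical, following the common ROS map_server convention:

```python
import numpy as np
from PIL import Image

RESOLUTION = 0.05  # m per cell, as in the maps of Figure 7

# Load an occupancy-grid image exported by a ROS-style mapper. In the
# usual convention, near-white pixels are free space, near-black pixels
# are occupied (walls, fixed obstacles), and grey pixels are unknown.
grid = np.asarray(Image.open("lab_map.pgm"))
free = grid >= 250
occupied = grid <= 50

cell_area = RESOLUTION ** 2  # 0.0025 m² per cell
print(f"free area:     {free.sum() * cell_area:.1f} m²")
print(f"occupied area: {occupied.sum() * cell_area:.1f} m²")
```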

Figure 8.

The human detection and navigation process. The upper display shows the map of the environment with detected human positions, while the lower left and right windows show real-time object detection and distance estimation for identified individuals
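
To place a detection on the map, the estimated distance has to be projected from the camera frame into map coordinates using the robot's localized pose. The paper does not spell this transform out, so the sketch below is one plausible planar version under our own naming:

```python
import math

def detection_to_map(robot_x, robot_y, robot_yaw, distance_m, bearing_rad):
    """Project a person detected at (distance, bearing) in the camera frame
    onto the 2-D map frame, given the robot's pose from localization.

    bearing_rad: angle of the detection relative to the camera's optical
    axis, e.g. derived from the bounding-box center column and the
    camera's horizontal field of view (positive = left, CCW convention).
    """
    heading = robot_yaw + bearing_rad
    return (robot_x + distance_m * math.cos(heading),
            robot_y + distance_m * math.sin(heading))

# Example: robot at (2.0, 1.5) facing +x, person 3 m ahead, 10 deg left.
px, py = detection_to_map(2.0, 1.5, 0.0, 3.0, math.radians(10))
```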

Figure 9.

Following a two-hour inspection, the locations identified as requiring further investigation were marked on the diagram. The color of each circle represents the risk level associated with the area, while its size indicates the extent of the area in which human movement was observed
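
A minimal sketch of how such a risk overlay could be rendered with matplotlib; the hotspot records, the color map, and the size scaling are all our assumptions, not values from the paper:

```python
import matplotlib.pyplot as plt

# Hypothetical hotspot records: map coordinates (m), risk in [0, 1], and
# the radius (m) of the area in which human movement was observed.
hotspots = [
    {"x": 1.2, "y": 3.4, "risk": 0.9, "extent_m": 0.8},
    {"x": 4.0, "y": 1.1, "risk": 0.4, "extent_m": 0.3},
]

fig, ax = plt.subplots()
sc = ax.scatter(
    [h["x"] for h in hotspots],
    [h["y"] for h in hotspots],
    c=[h["risk"] for h in hotspots],                   # color encodes risk level
    s=[(h["extent_m"] * 100) ** 2 for h in hotspots],  # marker area encodes extent
    cmap="RdYlGn_r", vmin=0, vmax=1, alpha=0.6,
)
fig.colorbar(sc, label="risk level")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
ax.set_aspect("equal")
plt.show()
```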
DOI: https://doi.org/10.14313/jamris-2025-020 | Journal eISSN: 2080-2145 | Journal ISSN: 1897-8649
Language: English
Page range: 1–12
Submitted on: Mar 12, 2025
Accepted on: Apr 23, 2025
Published on: Sep 10, 2025
Published by: Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Georgios Karamitsos, Vasileios Sidiropoulos, Evangelos Syrmos, Athanasios Sidiropoulos, Xenofon Karamanos, Dimitrios Vlachos, Dimitrios Bechtsis, published by Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.