The supply of workplace hygiene services has become a significant challenge, especially since the outbreak and subsequent spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The pandemic profoundly affected daily life, with a substantial portion of the global population reporting disruptions to personal, professional, and economic activities due to lockdown restrictions [1]. Public places and workplaces are recognized as potential hotspots for microbial transmission, emphasizing the importance of proactive hygiene measures [2, 3]. Although the immediate threat of SARS-CoV-2 has diminished, the pandemic has heightened awareness of workplace hygiene, prompting employers to adopt long-term protective measures to maintain cleanliness and reduce the spread of infectious diseases [4]. Recent studies highlight that even in the post-pandemic period, post-COVID-19 conditions continue to pose serious health risks, particularly among vulnerable populations such as older adults and those with preexisting conditions, underscoring the need for sustained hygiene and infection control measures [5]. It is therefore essential that automated solutions be implemented to maintain hygiene standards, reduce manual labor, and ensure public safety.
The pandemic has led to a paradigm shift in how workplace hygiene is approached. Consequently, autonomous vehicles have become prominent in controlling contamination in public places, healthcare facilities, and workplaces. Their capacity to carry out unmanned, contactless operations has proven crucial for environmental disinfection, particularly across the National Health Service (NHS) supply chains. Recent studies have demonstrated the feasibility of cost-effective autonomous vehicles for real-world applications, showing their potential to optimize navigation and obstacle avoidance while maintaining low implementation costs [6]. These vehicles have also significantly reduced labor costs while safeguarding workers from exposure to pathogens and hazardous disinfectant chemicals [7].
Historically, robotic systems have been integrated into rehabilitation and patient care, reducing the burden on healthcare workers. For example, assistive robots are widely used in rehabilitation settings, such as nursing homes, to increase social interaction among residents [8]. During the pandemic, robots were crucial in helping healthcare workers in their daily activities and protecting them from infection [9]. Recently, a preliminary study highlighted the potential of mobile robots to support nursing tasks in hospital settings, demonstrating their ability to reduce contamination risks and improve efficiency in supply delivery and medication administration for patients in isolation rooms [10]. In addition, mobile robots have shown potential in home care for elderly or bedridden individuals by supporting tasks such as mobility, toileting, and bathing in bed, significantly reducing caregiver workload [11]. As a result, the demand for decontamination robots in healthcare facilities has increased dramatically, reaching an estimated market size of 714.78 million USD in 2022 and projected to reach 7,697.57 million USD by 2030, at a compound annual growth rate (CAGR) of 32.84% from 2023 to 2030 [12].
In response to emerging trends for workplace disinfection, the proposed research, funded by the EIT Health Research Grant, introduces a customized robotic vehicle designed to perform workplace disinfection protocols. The proposed system aims to minimize the consumption of disinfectant fluids and reduce the risk of contamination in areas prone to virus transmission.
The robotic system integrates several key components, including a custom disinfection mechanism, a machine vision module, and a Decision Support System (DSS) to automate the disinfection process based on the probability of infection. In addition, a graphical user interface (GUI) has been developed to assist the operator in navigation and disinfection. This targeted approach reduces the amount of disinfectants used, improves overall hygiene, and ensures efficient and effective disinfection of high-risk areas.
This study focuses not on evaluating the effectiveness of the disinfectant solvent used but on the robotic system's technological development, assembly, and programming. By demonstrating the successful integration of the technological components, the system presents a scalable solution for continuous, preventive hygiene in indoor workspaces, enhancing anti-contamination measures and workplace safety.
Several solutions have been proposed to mitigate the impact of the SARS-CoV-2 pandemic, involving autonomous robots equipped with disinfection mechanisms. These robots can disinfect premises, thereby ensuring personal hygiene and reducing the reliance on manual labor.
Chemical disinfection robots typically use disinfectants that diffuse into the atmosphere to disinfect the air, surfaces of objects, and hard-to-reach areas. For example, Zhao et al. [13] developed a robotic system that efficiently disinfects areas contaminated with pathogenic microorganisms by combining Internet of Things (IoT) technologies with chlorine dioxide disinfection through aerosol spraying. The system aimed to improve workplace safety and reduce the need for manual labor. Similarly, Chio et al. [14] introduced a mobile robot for autonomous air and surface disinfection using the aerosolized hydrogen peroxide method, demonstrating high efficiency in an indoor office environment. Furthermore, Le et al. [15] developed an autonomous robot that uses the aerosolized hydrogen peroxide disinfection method with a target detection algorithm to disinfect premises. The effectiveness of this design was verified through air and surface quality monitoring.
An alternative approach to robotic disinfection involves short-wave ultraviolet C (UVC) lamps. UVC light damages the DNA/RNA of microorganisms, impeding cellular activity and replication. Robots equipped with UVC lamps have been shown to help prevent infectious diseases such as SARS-CoV-2, influenza, and tuberculosis [16]. Hu et al. [17] introduced a robot with UVC lamps to disinfect high-traffic environments such as hospitals, schools, airports, and transit systems.
This robot employed the Simultaneous Localization and Mapping (SLAM) algorithm to create an occupancy grid map of the environment and image recognition to identify areas at risk of contamination. In addition, Dogru and Marques [18] developed a trajectory generation framework that formulates the disinfection path as a Euler Circuit, ensuring complete surface coverage while minimizing travel distance. Their work demonstrates the effectiveness of optimizing robot movement to balance disinfection time, energy consumption, and exposure consistency, which is crucial for improving UV-C-based robotic disinfection systems. Similarly, Camacho et al. [19] proposed a robotic platform for UVC disinfection of indoor environments. The robot was capable of autonomous operation, ensuring the maximum possible surface area without direct supervision. Building on these approaches, Liu et al. [20] introduced an advanced disinfection robot scheduling and routing framework that integrates a mixed-integer programming (MIP) model to optimize task scheduling and minimize pathogen transmission risks. Their approach dynamically adjusts disinfection timing based on real-time environmental conditions and human activity patterns, reducing unnecessary energy consumption while maximizing decontamination efficiency.
However, despite the advantages of UVC disinfection, these robots face challenges when disinfecting more complex environments with varying angles and shadowed areas. Several UVC robots have incorporated optional spray attachments to address this issue and enhance coverage in hard-to-reach areas [21]. Cao et al. [22] developed a dual-function autonomous disinfection robot that integrates UVC light with hydrogen peroxide aerosol spraying, significantly improving decontamination coverage. Their study demonstrated that combining these methods increased disinfection efficiency by 53.4%, effectively mitigating shadowing issues associated with UVC alone. Another significant consideration is battery consumption, as UVC robots must balance the energy usage required for disinfection and system operation. Mantelli et al. [23] presented an autonomous UVC robot that creates a dynamic radiation map of the environment. The map illustrates the energy applied to each area, allowing the robot to optimize navigation. The robot moves faster in areas with lower energy requirements, while slowing down in areas requiring higher energy to ensure adequate disinfection coverage.
In addition to general disinfection systems, robots have also been developed to target specific objects that may harbor pathogens. Ramalingam et al. [24] proposed an automated door handle cleaning technique using the Toyota HSR mobile robot platform. The robot uses a deep learning model trained to detect door handles, enabling it to generate a set of coordinates for targeted disinfection. The effectiveness of the proposed framework was validated in indoor public spaces, demonstrating its potential to improve hygiene in areas of high contact.
While numerous research teams have proposed functional and advanced features for robotic disinfection systems, an integrated solution that focuses on the real-time prioritization of high-risk areas and minimizes disinfectant liquid consumption is lacking. Zhao et al. [13] proposed a semi-automatic ClO2 spraying robot that relies on remote control and fixed waypoints, without adequately addressing autonomous navigation and risk-based analysis. Chio et al. [14] integrated aerosolized hydrogen peroxide spraying with SLAM for full-coverage path planning, but their system cannot sufficiently adapt to real-time human presence. Hu et al. [17] employed deep learning to segment UVC-critical zones, but did not adequately address real-time human detection and decision support for zone prioritization. Dogru and Marques [18] optimized UV-C trajectories under kinodynamic constraints but did not sufficiently address coverage efficiency in dynamic, human-occupied settings. Camacho et al. [19] developed the ROS-based "COVIBOT," which autonomously maps and disinfects using UVC, but the prioritization of high-risk areas is not sufficiently articulated. Finally, Liu et al. [20] introduced a mixed-integer programming scheduler to minimize infection risk, but their robot does not sufficiently showcase onboard vision and real-time adaptability.
Previous disinfection robots have demonstrated exceptional performance in individual components; however, there is potential for improvement in their overall functionality. The proposed robotic vehicle enhances efficiency and effectiveness by integrating real-time human detection, density-based risk scoring, adaptive electrostatic spraying, and a graphical user interface for operators into a single, efficient robotic system. The primary contribution of this research is the integration of these technological components into a unified system capable of real-time prioritization of high-risk areas while simultaneously minimizing disinfectant liquid consumption. The proposed system leverages the ROS2 framework for robot operation and the YOLOv5 object detection model for real-time detection of individuals within the workspace. A Decision Support System (DSS) is integrated to assess the probability of infection based on human presence, enabling precise disinfection of the identified high-risk areas. Furthermore, the system features a graphical user interface (GUI) that facilitates robot control and enhances user-robot interaction, and a custom disinfection mechanism for resource-efficient disinfectant use. This end-to-end solution is a plug-and-play autonomous robotic system that requires only environment mapping and eliminates the need for extensive parameterization.
The disinfection robot is designed to identify and autonomously disinfect high-risk workplace areas by integrating navigation, human detection and disinfection mechanisms. This section provides a detailed overview of the robot’s architecture, subsystems, and modifications to optimize its performance for targeted disinfection tasks. As shown in Fig. 1, the disinfection robot is based on the Turtlebot4, a widely used open-source robotics platform for research and educational applications. Several modifications were made to adapt the Turtlebot4 for disinfection tasks. Initially, a custom wooden frame was installed to secure the spray canister containing the disinfectant liquid, preventing it from falling during movement. Additionally, the robot was equipped with a camera and LIDAR sensors to facilitate autonomous navigation and real-time image recognition to identify and disinfect high-risk areas.

The robot chassis consists of the robotic vehicle base station and a high-top frame for mounting the depth camera and the disinfection electrostatic sprayer. A depth camera is mounted on top of the frame, and a disinfection subsystem is mounted inside the frame, culminating in the spray nozzle.
The robot’s architecture comprises five subsystems: the disinfection subsystem, the mobile application subsystem, the machine vision subsystem, the decision support system (DSS), and the navigation subsystem. The disinfection subsystem manages the electrostatic spraying mechanism, which disperses disinfectant over identified high-risk areas. The spray actuator is controlled by an Arduino Mega Microcontroller Unit (MCU), ensuring that the appropriate amount of disinfectant is applied based on the contamination risk in each area. The mobile application features a graphical user interface (GUI) that allows the operator to monitor the robot’s status and operations in real-time. The GUI displays key information such as the environment map, battery status, and areas with detected human activity, while allowing users to control the robot’s navigation and disinfection process as necessary.
The machine vision subsystem uses the RealSense D435 stereo depth camera and the YOLOv5 object detection model to detect and locate individuals within the workspace. By providing 3D coordinates of the detected individuals, the robot can identify areas with a high human presence. The Decision Support System (DSS) is critical in analyzing real-time human positioning data to assess contamination risks. Based on this analysis, the DSS sends commands to the robot's navigation and disinfection systems, prioritizing high-risk areas for disinfection and optimizing the overall process. The navigation subsystem utilizes the RPLIDAR sensor and the Robot Operating System 2 (ROS2) to enable the robot to move autonomously through the environment. This subsystem uses the SLAM algorithm to create a real-time occupancy grid map of the workspace and ensures that the robot follows optimal paths to reach the high-risk areas identified by the DSS. The coordination of all these subsystems is managed by the Central Control Interface (CCI), which is powered by the Jetson Xavier NX, an embedded System-on-Module (SoM) equipped with an integrated Graphics Processing Unit (GPU).
Together, these subsystems work in unison, enabling the robot to navigate autonomously, detect the presence of humans, assess contamination risks, and perform targeted disinfection, thereby maintaining effective hygiene.
The disinfection process involves several steps to ensure thorough and efficient coverage. The process begins with creating a detailed map of the target area using the SLAM algorithm. Initially, an operator manually navigates the robot through the environment to capture spatial data and identify relevant areas, obstacles, and pathways, which form the basis for all subsequent navigation and disinfection activities. The map is continuously updated to reflect the position of static objects, such as furniture or walls, and dynamic objects, including humans, animals, and other moving obstacles.
Once the map is created, the robot navigates the environment based on commands issued by the operator. Upon receiving a navigation command, the robot moves through the workspace, continuously scanning for human presence using machine vision.
During the process, the system processes human positioning data to evaluate contamination risk levels in various areas. The risk levels are determined based on the frequency of human presence detected in each area, allowing for more precise identification of high-risk zones. These high-risk areas are visually highlighted on a real-time map displayed through the web interface. The highlighted zones' color and size correspond to the risk level: green indicates low presence, orange indicates medium presence, and red indicates high presence. This real-time visualization provides the operator with an overview of potential contamination hotspots, aiding the evaluation of the disinfection process. This method is similar to that of Ramalingam et al. [24], who utilized deep learning to guide a mobile robot for targeted disinfection, ensuring efficient sanitation.
After identifying high-risk areas, the robot autonomously navigates to these zones and performs targeted disinfection. Using an electrostatic sprayer, the robot applies disinfectant in proportion to the assessed risk level. In high-risk areas, the robot reduces its speed to apply more disinfectant, whereas in low-risk areas it moves at normal speed to conserve resources. Disinfectant application is considered adequate when the robot remains at each location for a predefined duration, with longer dwell times in high-risk regions and shorter ones in low-risk areas. This adaptive approach aligns with Mantelli et al. [23], who optimized UVC disinfection by dynamically adjusting robot speed based on radiation energy mapping. After completing its mission, the robot returns to its charging station and remains idle, waiting for the operator to launch the next mission.
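As a minimal illustration of this risk-proportional behavior, the Python sketch below maps a zone's risk level to a travel speed and a dwell time. The numeric values are illustrative assumptions, not the system's calibrated parameters; only the 0.26 m/s cruise speed is taken from the navigation settings reported later.

```python
# Illustrative sketch: mapping a DSS risk level to robot speed and dwell time.
# The dwell times and reduced speeds are assumed values for demonstration;
# only the 0.26 m/s cruise speed matches the reported navigation settings.
RISK_PROFILES = {
    "low":    {"speed_mps": 0.26, "dwell_s": 5},   # normal speed, brief pass
    "medium": {"speed_mps": 0.15, "dwell_s": 10},
    "high":   {"speed_mps": 0.08, "dwell_s": 20},  # slow down, spray longer
}

def disinfection_plan(risk_level: str) -> dict:
    """Return the speed and dwell time the robot should use for a zone."""
    return RISK_PROFILES[risk_level]

for level in ("low", "medium", "high"):
    plan = disinfection_plan(level)
    print(f"{level}: move at {plan['speed_mps']} m/s, dwell {plan['dwell_s']} s")
```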
As shown in Fig. 2, the disinfection robot comprises five subsystems, each playing a specific role in the disinfection process and overall system architecture. The following subsections provide a detailed analysis of the key technologies and mechanisms that enable the robot to operate effectively.

The diagram illustrates the system architecture of the autonomous disinfection robot. The system comprises the machine vision subsystem for human detection, the navigation subsystem for robot navigation and obstacle avoidance, the disinfection subsystem for disinfection tasks, the decision support system for data analysis and optimization, and the mobile application subsystem for user interaction and operation. The arrows indicate the data flow (blue) and control flow (yellow) between the components
The disinfection subsystem is responsible for the disinfection process. Its primary components are the sprayer, a customized device for soluble tablets, and a microcontroller. A review of the literature [25] on disinfection sprayers identified three market-ready solutions: liquid sprayers, mist sprayers, and electrostatic sprayers. The key factor distinguishing these sprayer categories is the proportion of disinfectant applied to a contaminated area.
After conducting a comparative analysis of the various sprayers and their respective features, electrostatic sprayers were determined to be the most suitable option due to their efficiency in applying disinfectants across a wide range of surfaces, making the disinfection process more effective.
Following extensive research into the electrostatic sprayer market, the VP300ES electrostatic sprayer from Victory Innovations was selected. In parallel with sprayer selection, a market survey was conducted to identify a suitable disinfectant solvent tablet. The factors that played a decisive role in the selection of the disinfectant tablet were the following:
– Effectiveness of the disinfectant against a wide range of pathogens, including SARS-CoV-2, and the time required to combat them.
– Versatility of use, considering potential applications in healthcare settings, educational facilities, rooms, entrances, and other high-touch areas.
– Compliance with European disinfection protocols and regulations.
– Safety and protection of staff and room occupants.
– Cost-effectiveness.
Based on these considerations, the research team chose Dustbane Products Ltd's Unitab dissolving tablets due to their effectiveness against various pathogens, including SARS-CoV-2. To integrate these components onto the robot, a custom mechanism was developed that combines the electrostatic sprayer with a dual tablet-drop mechanism. Both are driven by a circuit built around an ESP32 MCU, which communicates with the Central Control Interface (CCI) over a serial connection. Upon receiving a command from the CCI, the mechanism sprays disinfectant or drops a tablet into the liquid container.
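The serial exchange between the CCI and the MCU can be sketched as follows; the ASCII commands ("SPRAY", "DROP") and port settings are hypothetical stand-ins, since the firmware protocol is not documented here.

```python
# Sketch of the CCI-to-MCU serial link using pyserial. The "SPRAY <ms>" and
# "DROP" command strings are hypothetical stand-ins for the undocumented
# firmware protocol.
import serial  # pyserial

def open_mcu(port: str = "/dev/ttyUSB0", baud: int = 115200) -> serial.Serial:
    """Open the serial connection to the sprayer/tablet-drop MCU."""
    return serial.Serial(port, baud, timeout=1)

def spray(mcu: serial.Serial, duration_ms: int) -> None:
    """Ask the MCU to drive the electrostatic sprayer for duration_ms."""
    mcu.write(f"SPRAY {duration_ms}\n".encode())

def drop_tablet(mcu: serial.Serial) -> None:
    """Trigger the dual tablet-drop mechanism to dissolve a new tablet."""
    mcu.write(b"DROP\n")
```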
To provide an interface with the end user, a graphical user interface (GUI) was developed to visualize the disinfection process and real-time data of the robot’s operations.
The GUI displays critical information, including the workplace map; the robot's battery status, position, and orientation; the disinfection status; and the most frequently visited human locations. This data is collected through sensors mounted on the robot, such as the LIDAR, inertial measurement unit (IMU), and wheel encoders.
The GUI features several functional buttons designed to manage the robot's operations, as shown in Fig. 3. The “Initialize AGV Position” button allows users to assign the robot's position on the map, while the “Initialize AGV Orientation” button adjusts the robot's orientation. The “Move AGV” button enables the selection of a point on the map, directing the robot to navigate towards the desired location. For disinfection tasks, the “Enable Disinfection” button initiates the disinfection procedure for identified high-risk areas.

The disinfection robot system displays the environment, the AGV’s position, and options for disinfection and navigation
The “Drop Tablet” button releases a dissolving tablet into the liquid container. Finally, the “Go to Docking Station” button directs the robot back to its designated home location for charging, ensuring it is ready for future tasks. The system is designed to operate with a high degree of autonomy, with the operator assuming a supervisory role primarily for initial setup and monitoring. The operator issues navigation commands, and the robot navigates autonomously from a starting point to a specified location. When the DSS identifies high-risk areas, the operator issues the disinfection command, and the robot navigates to the specified location to perform the disinfection task.
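A button such as “Move AGV” plausibly resolves to a Nav2 NavigateToPose action goal in ROS2. The sketch below shows this pattern using the default Nav2 action name; the project's actual GUI-to-robot wiring is not published, so treat it as an assumption.

```python
# Sketch: sending a navigation goal the way a "Move AGV" button might,
# via the standard Nav2 NavigateToPose action (default action name assumed).
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose

class MoveAgvClient(Node):
    def __init__(self):
        super().__init__("move_agv_client")
        self._client = ActionClient(self, NavigateToPose, "navigate_to_pose")

    def send_goal(self, x: float, y: float):
        """Send the robot to map coordinates (x, y) picked on the GUI map."""
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = "map"
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0  # identity orientation; yaw omitted
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)

rclpy.init()
node = MoveAgvClient()
future = node.send_goal(2.0, 1.5)
rclpy.spin_once(node, timeout_sec=2.0)  # let the goal request go out
rclpy.shutdown()
```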
Object detection is a crucial task in computer vision, focusing on identifying instances of visual objects such as people, animals, or cars within digital images. The recent surge in the development of deep learning algorithms has significantly propelled the advancement of object detection, resulting in remarkable breakthroughs and widespread adoption in applications such as autonomous driving, machine vision, and video surveillance [26]. For instance, AI-driven surveillance systems have successfully implemented deep learning-based tracking methods to enhance real-time monitoring and security, allowing for accurate detection and reidentification of individuals across different camera views [27].
Deep learning-based computer vision has also been widely used in COVID-19 prevention, such as real-time face mask detection systems that monitor compliance with public health guidelines [28]. These systems leverage convolutional neural networks (CNNs) to accurately classify masked and unmasked individuals in public spaces. Similarly, this study employs a machine vision subsystem for detecting individuals and determining their locations in an indoor environment. The system integrates an Intel RealSense D435 stereo depth camera and the YOLOv5 (You Only Look Once) object detection model [29], providing precise and efficient detection and localization.
YOLOv5 is a lightweight, real-time object detection model developed by Ultralytics in 2020. It is known for its accuracy, speed, and low computational cost. YOLOv5 uses convolutional neural networks (CNNs) to predict class probabilities of objects detected within images. The model requires only a single forward propagation for object detection, simultaneously predicting different class probabilities and the bounding boxes that encompass objects. Recent studies have demonstrated the effectiveness of YOLO-based models in real-time object detection for assistive technologies, including their application in visual assistants for the visually impaired [30], highlighting the versatility of YOLOv5 for efficient object detection in real-time environments. The architecture of YOLOv5 consists of three components: the backbone, neck, and head, as illustrated in Fig. 4. The backbone is responsible for extracting essential features from the input images. YOLOv5 uses the CSPDarknet53 backbone to enhance computational efficiency through a bottleneck Cross-Stage Partial Networks (CSP) technique. The neck serves as an intermediary that combines features from different layers to improve object detection at various scales. Finally, the head generates output predictions using an anchor-based detection strategy and the SiLU activation function to enhance learning efficiency. YOLOv5 has different versions (Nano, Small, Medium, Large, Extra-Large) to accommodate different computational needs, making it ideal for real-time applications. In this study, the YOLOv5 Nano variant was utilized.
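A minimal way to reproduce this detection step is to load YOLOv5 Nano through torch.hub as distributed by Ultralytics. The fine-tuned weights used in this study are not public, so the stock pretrained model stands in below, with the confidence threshold set to the 0.31 operating point reported in the following evaluation.

```python
# Sketch: YOLOv5 Nano person detection via torch.hub (Ultralytics repo).
# The study's fine-tuned weights are unavailable; pretrained COCO weights
# are used here instead, restricted to the "person" class (COCO class 0).
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)
model.conf = 0.31    # confidence threshold from the F1 analysis
model.classes = [0]  # keep only "person" detections

results = model("office_scene.jpg")   # file path or an RGB numpy array
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # bounding-box center pixel
    print(f"person at pixel ({cx:.0f}, {cy:.0f}), confidence {conf:.2f}")
```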

YOLOv5 architecture overview: (a) Backbone with CSP bottleneck (BCSP) and SPP modules for feature extraction, (b) Neck with PANet structure for feature fusion, and (c) Head with Conv1x1 layers for final object detection
The YOLOv5n model was trained for 100 epochs on a dataset comprising 4,407 images, encompassing 11,000 instances of the "person" category. Performance was evaluated on a 1,071-image validation split using precision, recall, and mean average precision (mAP@0.5). By epoch 100, the model had converged to precision = 0.837, recall = 0.707, mAP@0.5 = 0.795, and F1 = 0.77, the latter occurring at a confidence threshold of approximately 0.31, while requiring only 4.1 GFLOPs per inference and maintaining real-time frame rates on the Jetson Nano.
The model's confidence-threshold behavior is characterized using precision-recall and F1-confidence curves: the PR curve confirms the mAP@0.5 of 0.795, with a peak precision of 1.00 at a threshold of 0.94 and a peak recall of 0.91 at a near-zero threshold, while the F1 curve peaks at 0.77 at a threshold of 0.31. Together, these results demonstrate robust real-time human detection.
The plots illustrated in Fig. 5 show how human detection performance varies with the confidence threshold, the model's internal probability (0–1) that a bounding box contains a person. When the threshold is very low (near 0.0), almost every candidate box is accepted, resulting in high recall (≈0.91) but many false positives; at higher thresholds the detector becomes more selective, trading recall for precision until precision reaches 100% around a cutoff of 0.94. The precision-recall curve (mAP@0.5 = 0.795) summarizes this tradeoff across all thresholds, and the F1-confidence curve shows that an intermediate cutoff of 0.31 maximizes the balance between precision and recall (F1 = 0.77). In practice, these results indicate that a 31% confidence threshold yields the most reliable real-time detection, capturing most true positives while limiting false alarms.
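As a consistency check, the reported precision and recall reproduce the stated F1 score: F1 = 2PR/(P + R) = (2 × 0.837 × 0.707)/(0.837 + 0.707) ≈ 0.77, which suggests that the quoted precision, recall, and F1 all describe the same 0.31 operating point.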

Evaluation of the fine-tuned YOLOv5 Nano human detector across various confidence thresholds: (a) Recall–Confidence curve, showing a maximum recall of 0.91 at thresholds near 0.00. (b) Precision–Confidence curve, with peak precision of 1.00 at a 0.94 threshold. (c) Precision–Recall curve, yielding mAP@0.5 = 0.795. (d) F1–Confidence curve, with the highest F1 value of 0.77 at a 0.31 threshold
In tandem with YOLOv5, the Intel RealSense D435 stereo depth camera extracts the 3D coordinates of detected individuals. RGB-D sensors have been widely applied for human motion tracking and posture classification, providing robust spatial awareness for intelligent systems [31]. The RealSense D435 uses two synchronized infrared cameras to capture stereoscopic images and a structured-light infrared projector to create a depth map of the scene. The camera uses projection to convert 3D points into 2D pixel positions, and deprojection to convert 2D pixel locations with known depth back into 3D coordinates. Therefore, when YOLOv5 detects an individual, the system calculates the center point of the bounding box and the distance between the detected individual and the camera, as illustrated in Fig. 6. This point is then deprojected to obtain the 3D coordinates of the individual within the image. The 3D coordinates are transformed from the camera's coordinate system to the robot's map coordinate system, ensuring that the occupancy grid map accurately reflects the positions of detected humans and allowing the robot to navigate autonomously to high-risk areas based on human presence. By continuously monitoring and updating the occupancy map based on human presence, the system ensures that disinfection efforts are focused on the busiest areas, optimizing disinfectant use and improving overall hygiene.
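The deprojection step can be sketched with the pyrealsense2 API as follows; frame alignment is included, while the camera-to-map transform (handled in the system via the robot's transform tree) is omitted for brevity.

```python
# Sketch: deprojecting a YOLOv5 bounding-box center to camera-frame 3D
# coordinates with pyrealsense2. The transform into the map frame is omitted.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)  # align depth pixels to the RGB frame
frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()
intrinsics = depth_frame.profile.as_video_stream_profile().get_intrinsics()

def person_to_3d(cx: int, cy: int):
    """Deproject a bounding-box center pixel into camera-frame metres."""
    depth_m = depth_frame.get_distance(cx, cy)  # range at that pixel
    return rs.rs2_deproject_pixel_to_point(intrinsics, [cx, cy], depth_m)

print(person_to_3d(320, 240))  # -> [x, y, z] in the camera frame
pipeline.stop()
```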

This figure illustrates the detection of individuals in an indoor environment using YOLOv5 and the RealSense stereo depth camera. The bounding boxes highlight each detected person, and the distance from the camera is also annotated. The visual output demonstrates the system’s capacity to estimate individuals’ distances in real time
To monitor and optimize the disinfection process, a Decision Support System (DSS) was developed to prioritize the sanitation of the work area. The DSS is designed to communicate and exchange information with the machine vision and navigation subsystems. Specifically, the DSS receives input data from the vision subsystem to determine the presence of individuals in the area and sends information about high-risk zones to the navigation subsystem.
The primary goal of analyzing the input data is to identify high-risk areas in the workplace and direct the robot to those areas. The input data consists of x-y coordinates describing the position of individuals in each region during the system’s operational period.
After fine-tuning and integrating the human detection model, the DSS employs Density-Based Spatial Clustering of Applications with Noise (DBSCAN), using a neighborhood radius of ε = 0.5 m and a minimum cluster size of minPts = 20, to group nearby detections into spatial clusters. The fuzzy approximation algorithm [32] clusters the identified location points based on density and highlights areas with a high concentration of individuals.
Once the input data is clustered, individuals' observed positions are visualized as spheres in the mobile application subsystem. The size of each sphere reflects the frequency of individuals observed in that area. Thresholds are set based on observation frequency; if the number of points counted exceeds the set threshold, the area is classified as high-risk. High-risk areas are easily identifiable to the user through more prominent spheres in the graphical interface. The clusters are divided into color-coded groups with adjustable threshold distances and radii for each proposed disinfection area.
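A compact sketch of this clustering and risk-classification step, using scikit-learn's DBSCAN with the stated parameters (ε = 0.5 m, minPts = 20), is given below; the count thresholds separating low, medium, and high risk are illustrative assumptions, since the paper does not quote them.

```python
# Sketch: DBSCAN clustering of detected x-y positions into risk zones.
# eps and min_samples follow the stated DSS parameters; the low/medium/high
# count thresholds are assumed for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_risk_zones(positions: np.ndarray) -> list[dict]:
    """positions: (N, 2) array of detected x-y map coordinates in metres."""
    labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(positions)
    zones = []
    for label in set(labels) - {-1}:            # -1 marks DBSCAN noise
        pts = positions[labels == label]
        center = pts.mean(axis=0)
        count = len(pts)
        risk = "high" if count > 100 else "medium" if count > 50 else "low"
        zones.append({
            "center": center,
            "radius": np.linalg.norm(pts - center, axis=1).max(),
            "count": count,
            "risk": risk,
        })
    return zones
```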
The navigation subsystem generates a map of the surrounding environment, localizes the robot within that map, plans paths, and navigates to target locations for disinfection. The initial stage involves creating a detailed environment map using the SLAM algorithm [33]. This algorithm allows the robot to construct a map from LIDAR or camera data while maintaining awareness of its current position. The SLAM algorithm is configured with a 0.05 m map resolution, a 5-second map update interval, and loop closure enabled over a 3-meter search radius.
This application utilizes a LIDAR sensor to collect environmental data during exploration by emitting a light beam in a 360-degree sweep.
The distance to an obstacle is calculated from the time the light beam takes to reach the obstacle and return to the sensor. The collected data is then processed to create an occupancy grid map representing the environment, including walls, obstacles, and open spaces. After generating the environment map, the Adaptive Monte Carlo Localization (AMCL) algorithm is employed to localize the robot within the map [34]. AMCL uses a particle filter to estimate the robot's position and orientation, continuously updated with sensor data from the LIDAR and encoders. AMCL runs with 500–2000 particles and resamples on every scan to maximize pose fidelity, updating at a frequency of 20 Hz. The navigation stack is configured with a maximum linear velocity of 0.26 m/s, a maximum angular velocity of 1.0 rad/s, and goal tolerances of 0.25 m locally and 0.5 m globally.
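Expressed as a ROS2 Python launch file, the stated mapping and localization settings might look as follows. Parameter names follow slam_toolbox and Nav2 AMCL conventions; the project's exact configuration files are not published, so this is an assumption.

```python
# Sketch: a ROS2 launch file carrying the quoted SLAM and AMCL parameters.
# Parameter names follow slam_toolbox / nav2_amcl conventions and are assumed.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    slam = Node(
        package="slam_toolbox", executable="async_slam_toolbox_node",
        parameters=[{
            "resolution": 0.05,                   # 0.05 m grid cells
            "map_update_interval": 5.0,           # seconds between map updates
            "do_loop_closing": True,
            "loop_search_maximum_distance": 3.0,  # 3 m loop-closure radius
        }],
    )
    amcl = Node(
        package="nav2_amcl", executable="amcl",
        parameters=[{
            "min_particles": 500,
            "max_particles": 2000,
            "resample_interval": 1,               # resample on every update
        }],
    )
    return LaunchDescription([slam, amcl])
```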
Once the map is created and the robot is localized, path planning and navigation follow. The robot must navigate from its current position to the identified high-risk areas for disinfection, which involves a two-step approach: global path planning and local path planning. For global path planning, the A* algorithm is used. A* is a widely recognized algorithm that determines the shortest route from the robot's current position to a target area, considering distance and obstacle avoidance to generate an optimal path [35]. The Timed Elastic Band (TEB) algorithm is used for local path planning. TEB dynamically adjusts the robot's trajectory to account for moving obstacles and environmental changes [36], continuously refining it to ensure effective obstacle avoidance during navigation [37].
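The global planning step can be illustrated with a compact A* search over a 4-connected occupancy grid. Nav2's planner is considerably more elaborate (costmaps, inflation, smoothing), so this sketch shows only the core shortest-path idea.

```python
# Minimal A* sketch on an occupancy grid (0 = free, 1 = occupied),
# 4-connected moves, Manhattan-distance heuristic.
import heapq

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    seen = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the occupied middle row
```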
In the absence of an external ground-truth system, the mapping and localization quality was validated through direct comparison of the SLAM-generated occupancy grid with the known floor plan and sensor-derived scans. Specifically, the ROS-generated map was overlaid onto the facility's CAD layout and LIDAR point clouds, revealing that all walls, doors, and fixed obstacles aligned within one grid cell (about 0.05 m) of their surveyed positions. Furthermore, repeated execution of "go-to-pose" commands to stored waypoints resulted in successful navigation without any recorded collisions, validating the robot's ability to accurately estimate its position within the 60-square-metre workspace. This close correspondence, in which every structural feature is localized on the map in accordance with its real-world position, provides confidence in the navigation stack's accuracy and reliability and supports real-time prioritization of the identified high-risk areas for targeted disinfection.
The research team performed experiments in the facilities and laboratories of two universities, thereby facilitating a comprehensive exploration of different environments.

a) Laboratory at Aristotle University of Thessaloniki (70 m2), showing accurate reconstruction of walls, doorways, and fixed obstacles within a 0.05 m resolution grid. b) Classroom at International Hellenic University (60 m2), demonstrating consistent map quality across different indoor environments
The experiments were conducted in a classroom at the International Hellenic University and in the reception area of the professors' offices at Aristotle University of Thessaloniki. Detailed maps of these premises are provided for reference. For example, as shown in Fig. 6(b), the classroom setting posed significant challenges due to numerous static obstacles, such as cabinets and components, and dynamic obstacles, such as humans. The experiment aimed to evaluate the robot's ability to map the environment, detect human presence, and autonomously disinfect high-risk areas based on real-time data. After an inspection period, the system identified several high-risk areas, as illustrated in Fig. The system accurately detected human presence and tracked individuals' movements to distinguish consistently occupied areas. The web interface displayed color-coded circles to represent risk levels: green circles indicated low mobility and low risk, orange circles represented medium mobility, and red circles highlighted areas of high mobility, classified as high-risk. Of the four regions detected, two were classified as high-risk, one as medium-risk, and one as low-risk. The robot autonomously disinfected the high-risk areas, applying more disinfectant where contamination risk was higher and reducing disinfectant use in low-risk areas, thereby optimizing resource consumption.
In this research work, conducted within the EIT Health Research Grant project, a customized robot has been implemented to disinfect indoor workplaces. The proposed solution integrates a machine vision system to locate visitors and workers in a specific area. It also integrates a decision-making system that selects high-risk areas for disinfection and adjusts the spraying volume based on the risk level, with higher-risk areas receiving more disinfectant. The disinfection process is initiated via a user-friendly web interface that provides the robot operator with information about high-risk areas and their size.

The following illustration depicts the human detection and navigation process. The upper display depicts the map of the environment with detected human positions, while the lower left and right windows illustrate real-time object detection and distance estimation for identified individuals

Following a two-hour inspection, the locations identified as requiring further investigation were marked on the diagram. The color of the circles represents the risk level associated with each area, while the size of the circles indicates the extent of the area in which human movements were observed
The innovation of the system lies in three key aspects: (i) the integration of machine vision to monitor areas with high human concentration, (ii) the integration of an intelligent decision-making mechanism that selects high-risk areas for disinfection and adjusts the spraying volume based on the estimated infection probability, with higher-risk areas receiving more disinfectant, and (iii) the selection of the shortest route to carry out the processes. The proposed robotic system optimizes the amount of disinfectant liquid required to cover an area and the time needed to complete the disinfection process in large buildings and workspaces.
During validation, the robot was tested in an area with high human presence. The performance metrics considered were map accuracy and human detection efficiency. The robot successfully mapped the environment, accurately representing the physical layout, and identified the points with the highest human presence and activity. It then autonomously navigated to the high-risk areas to perform disinfection. The system demonstrated accurate disinfection, confirming its practical applicability in real-world settings.
Originally, the robot uniformly sprayed the entire 30-square-meter workspace of the 60-square-meter laboratory. Integrating machine vision and the DSS resulted in a 33% reduction in disinfectant usage, as the system now selectively targets only ten high-risk 1 m2 zones for decontamination instead of blanket-spraying the entire area. Future research will focus on modifications to the robot chassis, with improved fluid capacity and battery autonomy to support operations in larger areas. Additionally, integrating more nozzles will allow the robot to spray in multiple directions simultaneously or to select a specific direction based on the area layout.
While our lab trials demonstrated feasibility, real-world deployment requires maintenance and adaptation strategies. We plan endurance runs to assess battery life, anticipating recharging every 2 hours under continuous operation, and to monitor nozzle clogging and wear, scheduling tablet-drop and sprayer-nozzle inspections accordingly. To minimize downtime, the proposed robotic system will feature a removable battery pack and interchangeable fluid containers as spare parts. We will also implement remote vehicle management, logging usage and maintenance data, and provide on-board support for multiple facility occupancy grid maps. Furthermore, the consortium is committed to evaluating the robustness of the robotic vehicle in various real-world environments. By expanding trials beyond university laboratories to office suites and meeting rooms with differing crowd densities, the consortium aims to assess the system's mapping efficiency in each setting.
The consortium also plans to participate in key industry conferences and events, such as the European Detergents Conference and the WA Disinfection and Disinfection By-products Conference, to attract potential partners and expand the solution’s applicability. Another critical step is attempting to patent the innovative solution. This approach could pave the way for exclusive distribution rights, additional revenue through royalties, and commercial sales under the Hi-Gien brand.
