This study introduces a machine learning (ML)-based detection framework that can be configured for windowed or panoramic operation on a single, cost-efficient 360-degree passive camera, for use on autonomous military unmanned ground vehicles (UGVs). Active sensor-fusion systems are often costly and easily detectable; this study therefore explores a passive approach that improves stealth and reduces system complexity. The detection framework partitions the panoramic image to focus on localised or global scene views depending on task demands, balancing inference resolution against processing efficiency. A dataset of CV90 and BMP-2 combat vehicles was collected and used to train and evaluate SSD ResNet50, Faster R-CNN ResNet50, and EfficientDet D1 models within this configuration architecture. Experimental results showed that EfficientDet D1 in the windowed configuration achieved the highest static detection accuracy, while Faster R-CNN in the windowed configuration outperformed the other models in live field deployment. The complete system was integrated into the Laykka UGV platform and assessed at Technology Readiness Level 6 (TRL 6) under representative, mission-relevant environmental conditions. The results underscore the feasibility of integrating passive sensors and ML in autonomous, expendable UGV systems.
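The windowed/panoramic partitioning described above can be illustrated with a minimal sketch. This is an assumed implementation, not the authors' exact configuration: it presumes an equirectangular 360-degree frame and a horizontal sliding-window scheme, and the `window_width`, `overlap`, and seam-wrapping choices are illustrative.

```python
# Hypothetical sketch of the windowed vs. panoramic view partitioning;
# parameters and wrap-around handling are assumptions, not the paper's setup.
import numpy as np

def partition_panorama(frame: np.ndarray, mode: str = "windowed",
                       window_width: int = 640, overlap: int = 64) -> list[np.ndarray]:
    """Split a 360-degree frame into detector inputs.

    In "panoramic" mode the full frame is returned as a single global view;
    in "windowed" mode it is cut into overlapping horizontal windows,
    wrapping across the 0/360-degree seam so no azimuth is lost.
    """
    if mode == "panoramic":
        return [frame]

    _, width, _ = frame.shape
    stride = window_width - overlap
    windows = []
    for x in range(0, width, stride):
        if x + window_width <= width:
            windows.append(frame[:, x:x + window_width])
        else:
            # Wrap the final window across the panorama seam.
            wrapped = np.concatenate(
                (frame[:, x:], frame[:, :(x + window_width) % width]), axis=1)
            windows.append(wrapped)
    return windows

# Example: a 1024x4096 panorama yields eight overlapping 640-px windows,
# each of which could be fed to a detector such as EfficientDet D1.
views = partition_panorama(np.zeros((1024, 4096, 3), dtype=np.uint8))
print(len(views), views[0].shape)
```

The trade-off this sketch exposes is the one the abstract names: windowed mode preserves per-pixel resolution for small, distant targets at the cost of several inference passes per frame, while panoramic mode runs a single pass over the downscaled global scene.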