
Advancing Data Quality of Marine Archaeological Documentation Using Underwater Robotics: From Simulation Environments to Real-World Scenarios

Open Access | Feb 2024

Figures & Tables

Figure 1

Spatiotemporal graph of the 3D documentation and mapping of underwater archaeological sites of diverse scales and structural complexities via marine robotic operations. (SLAM: Simultaneous localization and mapping).

Figure 2

ROV operations in shipwreck environments. Left: ROV Minerva surveying the 18th-century wreck at Ormen Lange, Norway, at 170 m depth (Courtesy of Vitenskapsmuseet, NTNU). Right: ROV control room (Courtesy of AUR-lab, NTNU).

Figure 3

Flowchart of the proposed method: The three phases of an underwater robotic operation for the documentation of UCH sites.

Figure 4

Left: A ZED stereo camera, a stereo rig of HD cameras, and a GoPro camera mounted on the SUB-Fighter 30K ROV. Middle: A down-looking stereo rig of GoPros mounted on a Blueye ROV. Right: Four GoPros mounted along the snake robot.

Figure 5

Left: Eelume snake robot scanning the hull of M/S Herkules wreck in a straight pose. Right: Eelume snake robot scanning the bow of M/S Herkules wreck in a U-shape.

Figure 6

Calibration of the Blueye ROV's integrated camera.

Figure 7

The 3D model was imported into Blender, and the Principled Volume node was used to mimic volumetric scattering. The parameters were set as follows: scattering color to RGB (0.008, 0.012, 0.264); absorption color to RGB (0.628, 0.628, 0.628); density to 0.1; and anisotropy to 0.9. Left: A real Herkules image sequence serves as the reference. Right: The 3D model of the wreck is used in Blender to simulate underwater images.

Figure 8

Applying ORB-SLAM3 to real underwater footage of the Herkules wreck without an IMU. a. The frequently updated point cloud and trajectory of the monocular camera. b. The green boxes are feature-point regions recognized by ORB-SLAM3; these are used to estimate the camera's location and orientation.

Figure 9

Digital Terrain Model (DTM) built from a point cloud of the WWII airplane wreck obtained by a multibeam echosounder sensor on the Eelume snake robot (Courtesy of Eelume AS).

Figure 10

Illustration of the intuition behind the BPA and PSR methods. a. BPA: The black lines represent the surface in 3D. The green circle is a '2D ball' used for reconstructing the surface. Note that no line is drawn across the gap because it is too large compared to the diameter of the ball. b. PSR: The lines represent the surface in 3D. The light orange line indicates good support from its neighboring points; the dark orange line indicates poor support. In general, the points do not lie exactly on the surface.
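The gap condition illustrated in panel (a) can be sketched in a few lines of Python. This is a minimal 2D illustration with a hypothetical point set, not the full pivoting algorithm: it only checks the diameter criterion the caption describes, connecting consecutive samples when the gap does not exceed the ball's diameter.

```python
import math

def bpa_edges_2d(points, radius):
    """Connect consecutive 2D points only if a ball of the given radius
    can bridge them, i.e. the gap does not exceed the ball's diameter
    (the condition illustrated in Figure 10a)."""
    edges = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        gap = math.hypot(x1 - x0, y1 - y0)
        if gap <= 2 * radius:  # a ball of diameter 2r can touch both points
            edges.append(((x0, y0), (x1, y1)))
    return edges

# Surface samples with a large gap between x=2 and x=5.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (5.0, 0.0), (6.0, 0.0)]
print(len(bpa_edges_2d(pts, radius=1.0)))  # prints 3: the 3-unit gap stays open
```

Increasing the radius to 2.0 (diameter 4.0) closes the gap and yields all four edges, mirroring how the choice of ball radii controls hole formation in the 3D reconstruction.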

Figure 11

The point cloud of the seaplane wreck (Figure 9) contains 2,159,476 points, from which 10,000 points were extracted. The BPA and PSR methods from Open3D were applied to the extracted point cloud (Zhou et al. 2018). a. Ball Pivoting Algorithm with ball radii of 0.5, 0.7, and 1.0 meters. Holes indicate a lack of information. b. Poisson Surface Reconstruction with level of detail 9. The colors of the reconstructed surface indicate the degree of support, from violet (least supported surface patch) to yellow (most supported surface patch), using the plasma color scale from matplotlib (Hunter 2007).
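The support shading in panel (b) follows the intuition that a surface patch near many points is well supported. A minimal stdlib sketch of that idea, using a hypothetical 2D point set and a simple neighbor count in place of the Poisson densities:

```python
import math

def support_counts(points, radius):
    """For each 2D point, count neighbors within `radius`.
    High counts correspond to well-supported (yellow) patches,
    low counts to poorly supported (violet) ones."""
    counts = []
    for i, (xi, yi) in enumerate(points):
        n = sum(
            1
            for j, (xj, yj) in enumerate(points)
            if i != j and math.hypot(xi - xj, yi - yj) <= radius
        )
        counts.append(n)
    return counts

# A dense cluster of four points plus one isolated point.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
print(support_counts(pts, radius=0.5))  # prints [3, 3, 3, 3, 0]
```

In the figure, the per-patch density values returned by the Poisson reconstruction are normalized and mapped through a color scale in the same spirit: the isolated regions end up at the violet end.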

Figure 12

Holes extracted from Figure 11a. a. Holes with mean locations and orientations. b. Overview of all holes and the model boundary.

Figure 13

Photogrammetric reconstruction of the M/S Helma wreck. Left: Original underwater image. Middle: 3D point cloud. Right: Simulated trajectory of the 30K ROV and multi-camera system recording the wreck in Blender.

Table 1

Strengths and weaknesses of the presented 3-phase workflow.

Strengths:
  • Cost effectiveness: Both planning operations in simulation environments and real-time data evaluation yield significant time savings on site, increased operational efficiency, and fewer repeat visits.

  • Increased data quality: Adequate overlap, site coverage, and detection of sensitive areas (due to high complexity or the presence of obstacles) are achieved in real time, ensuring high-quality data for post-processing. Multi-vision and sensor data fusion provide redundancy in data collection, enable obstacle avoidance, and support control of geometric accuracy.

  • Reliability: The application of the three-phase workflow in real-world experiments validates the practicality and general robustness of the methods for UCH mapping.

  • Innovation: Marine archaeological research shows high adaptability towards new technologies coming from different scientific fields and industries.

  • Autonomy: Although the marine archaeologist is present during all three phases of UCH documentation, most decision making takes place during the mission through real-time data evaluation.

Weaknesses:
  • ORB-SLAM3 system dependence on visual features: While the ORB-SLAM3 system is operational during a mission, it can falter if there are not enough visual features for a match. Although the system can resume tracking when adequate visual features are present, it initializes with a new map each time, posing continuity challenges.

  • Underutilized incremental nature of BPA: The incremental characteristic of BPA remains untapped. Development is required to harness this feature effectively.

  • Speed constraints with PSR: Even though PSR outpaces BPA for large point clouds, processing as many as two million points in 2 seconds in our tests, its efficiency may decrease with extremely large point clouds. Nevertheless, for shipwreck reconstructions, the speed is typically adequate.

  • Lack of real-time footage from the externally mounted cameras in real-world multi-camera scenarios: The assumption of full coverage and an extended field of view rests on their pre-defined configuration on the robot during the simulation phase.

  • Human operator in the loop: Currently, a human operator needs to constantly track the robot's trajectories, verify the site's coverage and image quality, detect obstacles, and make decisions.

DOI: https://doi.org/10.5334/jcaa.147 | Journal eISSN: 2514-8362
Language: English
Submitted on: Jan 5, 2024
Accepted on: Jan 15, 2024
Published on: Feb 21, 2024
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2024 Eleni Diamanti, Mauhing Yip, Annette Stahl, Øyvind Ødegård, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.