Accurate visual measurement of large-scale objects is challenging due to factors such as limited equipment, object occlusion, and fluctuations in ambient lighting, and dense, accurate point cloud data are often difficult to obtain in a single measurement [1], [2]. The global stitching technique, which combines multi-view measurement with an optical tracking algorithm, has proven successful in various scanning applications [3], [4]. This method is useful for reconstructing three-dimensional objects that lack distinct markers within a vast field of view. Precise measurements are crucial for the tracking and stitching process, which transforms individual measurement data into a global coordinate system and optimizes the measurement parameters through matrix transformations [5], [6]. The core of optical tracking lies in the alignment of point clouds, which enables the synchronization of coordinate systems across different viewing angles [7], [8], [9], whether the alignment is based on shape [10], [11] or on tracking. However, as the measurement range increases and the field of view changes, both marker-based and shape-based techniques are susceptible to cumulative measurement errors. The optical tracking-based measurement approach combines global positioning with a local scanning system. The global tracking system can be realized with stereo vision technology for a large field of view [12], [13], while the local scanning system can use laser triangulation or grating-structured light. The data obtained with the local scanning system are converted into a unified coordinate system by the global tracking system. Although this approach avoids error accumulation and allows flexible measurements, calibration errors arise from the transformation of the position matrix between the measurement and tracking systems. Barone [14], [15] used a standardized conversion calibration board for this calibration.
In this method, the coordinates are converted from the calibration board to the measurement system and then to the tracking system, followed by registration to calculate the transformation matrix. Other researchers have further developed this calibration method to construct various combined measurement systems. J. Shi [16], [17], [19], S. Bai [18], T. Jiang et al. [20], and Y. Zhang [21] have proposed approaches for reconstructing large objects based on a similar scheme. In these approaches, a structured light measurement system is used as a local scanning system to reconstruct regional surfaces, while a stereo tracker is used for global registration. However, this calibration process is limited by the restriction of the common field of view, as the tracking system and the measurement system are unable to simultaneously acquire information from the same calibration plate. This limitation renders it unsuitable for calibrations that require shared information from both systems. Furthermore, if the 3D data is distributed on a plane, the registration process becomes invalid due to insufficient dimensional information. QY. Zhou et al. [22] applied the perspective-n-point (PnP) algorithm to calculate the transformation matrix between the calibration board and the tracking system, using a monocular camera in the global system. The calibration process primarily involved changes in the position and pose of the calibration board. However, apart from computational instability, the PnP method is sensitive to variations in pixel intensity and measurement depth during pose estimation and registration. Since the scale information between the local and global coordinate systems is not considered, the calibration accuracy is affected, leading to unstable calibration results. G. Xu et al.
[23], [24] proposed a laser sensor with unrestricted installation positions and local attitudes, so that calibration between the sensor and the global vision system is not required. However, this approach still requires a pre-calibration of the laser plane; that is, it only replaces the camera-to-laser-plane calibration with a camera-to-calibration-plate transformation. In addition, a wide field of view is required to capture both the flat plates and the laser stripes within the range of the global camera. Zhao et al. [25] presented a simple method for calibrating non-overlapping cameras that allows specific camera positions to be recognized. Although the conversion relationship can be established with already-calibrated cameras, the accuracy varies across different positions within the field of view even when the markers are uniformly distributed during use. Therefore, in cases where the calibration plates are not placed within the high-precision region of the field of view, the accuracy cannot be guaranteed. In the study by Sarmadi et al. [26], two-dimensional calibration plates and three-dimensional marker points were used to estimate the three-dimensional structure and relative pose of planar markers. However, these traditional calibration methods pose a number of challenges in practice, mainly due to their inconvenient operation and their inability to guarantee the accuracy of the final calibration.
The main purpose of the tracking system is to integrate the binocular system with the multi-line laser system. This enables real-time tracking and stitching of the three-dimensional data reconstructed by the laser system, achieved by determining the spatial position of the multi-line laser system in real time. The calibration of the whole system therefore includes the calibration of the binocular tracking system and the joint calibration of the multi-line laser system and the tracking ball cage. The conventional joint calibration of the multi-line laser system and the tracking ball cage is based on the planar calibration plate method, as shown in Fig. 1. It calibrates the tracking ball cage to the multi-line laser system by calculating the conversion relationship between the planar calibration plate and the camera. However, such planar calibration plates pose great challenges owing to their size, high production cost, and inability to guarantee planar accuracy. These limitations can introduce large calibration errors and make the calibration process complex. To solve this problem, a method based on multi-position orientations and conjugate gradient iteration using the cross pole is proposed to complete the joint calibration between the multi-line laser system and the tracking ball cage. The cross pole is easy to manufacture, and the 3D coordinates of its coding points can be determined in advance using a coordinate measuring machine. Both the calibration of the binocular tracking system and the joint calibration between the multi-line laser system and the tracking ball cage can then be completed using the precise 3D coordinates on the cross pole. The proposed method, based on multi-position orientations and conjugate gradient iteration, improves the convenience of calibration and achieves higher calibration precision, thereby improving the final stitching accuracy.

Traditional multi-line laser system and tracking ball joint calibration schematic diagram.
The tracking system hardware consists of three main components: the binocular tracking system, the multi-line laser system, and the tracking ball cage, which is integrated into the multi-line laser system hardware, as shown in Fig. 2. The binocular tracking system is equipped with two industrial cameras that are mainly responsible for the real-time tracking of the ball cage within a given field of view. The multi-line laser system, in contrast, includes a binocular camera and a multi-line laser, which are primarily intended for reconstructing the object. The tracking ball cage, which consists of several polyhedra, makes it possible to determine the spatial position and orientation of the multi-line laser system.

Track system structure diagram.
The STM32L151 microcontroller, as shown in Fig. 3(a), is responsible for managing the entire system. This ultra-low-power device is based on a 32-bit Arm Cortex RISC core that can operate at frequencies of up to 80 MHz. The core features a single-precision floating-point unit (FPU) that supports all Arm single-precision data-processing instructions and data types, and it also includes a comprehensive set of digital signal processing (DSP) instructions and a memory protection unit (MPU) to increase application security. The hardware control aspect of the system includes synchronized trigger control for the binocular tracking system camera, laser projection, and camera acquisition, and it manages the multi-line laser system. In multi-line laser 3D reconstruction, the main task of the controller is to initiate the laser projection and simultaneously trigger the camera to capture the laser image. It then monitors the binocular tracking system and synchronizes the camera triggers to capture the white-light image. Trigger acquisition and laser projection can be controlled by sending commands through the serial port. The architecture of the controller's motherboard is shown in Fig. 3(b), and the timing is shown in Fig. 4.

Hardware controller, (a) hardware circuit board, (b) circuit board structure diagram.

Signal timing diagram of the laser controller.
The primary circuitry of the tracking system control mechanism is divided into two main components: system timing configuration and image acquisition management. Fig. 4 illustrates the control protocol using a multi-line laser to capture three images. During configuration of the system timing, the Set_Valid signal is activated and the sequence 0x55AA, 0x01, 0x0E, 0x0A, 0x0A, 0x0A, 0x0A, 0x2C is written. This sequence comprises the frame header, the status code, the configuration mode, the high- and low-level durations for seven-line laser A, for the fill light, and for seven-line laser B, and the checksum. The timing configuration is accepted if the sum of the preceding bytes matches the lower octet of the checksum. For camera image acquisition control, the Control_Valid signal is activated and the sequence 0x55AA, 0x01, 0x06, 0x06 is written, identifying the frame header, the status code, the camera image acquisition trigger mode, and the mode check bit. Once the check bit has been verified, the Trigger_Valid signal for image acquisition is activated and triggers the fill light, seven-line laser A, and seven-line laser B in sequence. The camera acquisition signal then captures a white-light image, a seven-line laser A image, and a seven-line laser B image. After these three images have been captured, Trigger_Valid returns to a low level, and the collected images are passed on for reconstruction.
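As a concrete illustration of the checksum rule, frame construction can be sketched as follows; the helper names and the exact byte layout are assumptions for illustration, not the controller's actual firmware interface.

```python
def build_config_command(durations):
    """Build a timing-configuration frame for the laser controller.

    Hypothetical layout following the description above: a two-byte header
    0x55 0xAA, a status code 0x01, a configuration mode 0x0E, the high/low
    level duration bytes, and a one-byte checksum equal to the lower octet
    of the sum of all preceding bytes.
    """
    frame = [0x55, 0xAA, 0x01, 0x0E] + list(durations)
    checksum = sum(frame) & 0xFF  # lower octet of the byte sum
    return bytes(frame + [checksum])


def verify_frame(frame: bytes) -> bool:
    """A frame is accepted if its last byte matches the lower octet
    of the sum of all preceding bytes."""
    return frame[-1] == sum(frame[:-1]) & 0xFF
```

The same lower-octet rule would apply on the receiving side before the configuration is accepted.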
The main purpose of the tracking system is to combine a binocular system with a multi-line laser system for real-time tracking and alignment, so that the spatial position of the multi-line laser system can be determined. When calibrating the binocular tracking system, it is therefore essential to account for its broad field of view. Calibrating such a tracking system with the traditional flat-plate method presents many challenges; the main difficulty is manufacturing a calibration board precise enough for such an extensive field of view. This article therefore proposes a cross pole-based calibration method; the cross pole is easy to manufacture and to place within the field of view. The cross pole is equipped with marked points carrying unique codes (see Fig. 5) to ensure that the coding remains unique across different positions and angles. The three-dimensional coordinates of all coded points on the cross pole are obtained in advance using third-party tools; the pole is then positioned at multiple locations within the field of view while the cameras collect data. The center coordinates and code values of all marked points on the cross pole are then identified in each image and matched with the corresponding three-dimensional point coordinates on the cross pole. By applying bundle adjustment techniques, the internal and external parameters of the binocular tracking system can be accurately calibrated.

Binocular tracking system and cross pole.
Based on the pinhole imaging principle of cameras, when using an ideal optical camera for photography, the three-dimensional point of the object, the center of the lens and the image point align in a straight line. This alignment represents the ideal linear imaging model of the cameras, which can be formally represented with a homogeneous coordinate matrix.
Here (r1,⋯,r9) are the nine components of the third-order rotation matrix R, and (tx, ty, tz) are the three components of the translation vector.
In practical imaging systems, the deviation of the image point caused by the optical distortion of the lens must be considered for the collinearity condition among the object point, the lens center, and the image point to hold.
Here (u, v) and (u0, v0) are the coordinates of the image point and the principal point in the image pixel coordinate system, (x, y) are the coordinates of the image point in the physical image coordinate system, and (Δx, Δy) is the image point deviation caused by the optical distortion of the camera.
Here r² = x² + y².
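The ideal pinhole projection with radial distortion described above can be sketched as follows; the parameter names (fx, fy, k1, k2) are generic pinhole and distortion symbols, not values from this paper.

```python
import numpy as np

def project_point(Pw, R, t, fx, fy, u0, v0, k1=0.0, k2=0.0):
    """Project a 3D world point into pixel coordinates with a pinhole model
    plus radial distortion. A minimal sketch; only two radial terms are kept."""
    Pc = R @ Pw + t                          # world -> camera coordinates
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]      # normalized image coordinates
    r2 = x * x + y * y                       # r^2 = x^2 + y^2, as in the model
    d = 1.0 + k1 * r2 + k2 * r2 * r2         # radial distortion factor
    u = fx * x * d + u0                      # distorted pixel coordinates
    v = fy * y * d + v0
    return np.array([u, v])
```

With k1 = k2 = 0 the function reduces to the ideal linear imaging model.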
The camera parameters, which are determined from the orientation of the image sequence and the three-dimensional spatial points obtained by reconstruction, deviate from their true values due to noise in the acquired images. To minimize these errors, a self-calibrating bundle adjustment method is used for the non-linear optimization of the camera parameters and the three-dimensional point coordinates in space. The objective function of this optimization, which is based on the principle of non-linear least squares, can be expressed as follows.
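A minimal sketch of the reprojection-error residuals that such a bundle adjustment minimizes; to stay short, only the camera translation is treated as unknown here, whereas the full method also refines the rotation, intrinsics, distortion, and the 3D points.

```python
import numpy as np

def reprojection_residuals(params, points3d, observations, fx, fy, u0, v0):
    """Residual vector for the bundle adjustment objective: stacked differences
    between observed and projected image points. Only the translation t is
    refined in this toy version (R = I); names are illustrative."""
    t = params
    res = []
    for P, (u_obs, v_obs) in zip(points3d, observations):
        X, Y, Z = P + t                        # point in the camera frame
        u = fx * X / Z + u0                    # ideal pinhole projection
        v = fy * Y / Z + v0
        res.extend([u - u_obs, v - v_obs])     # reprojection error terms
    return np.array(res)
```

The non-linear least-squares objective is then the sum of squares of this vector, which a Gauss-Newton or Levenberg-Marquardt solver minimizes.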
Once the binocular tracking system has been successfully calibrated, the next step towards markerless multi-line laser stitching is to complete the joint calibration between the tracking ball cage and the multi-line laser system. This joint calibration essentially establishes a conversion relationship between the two coordinate systems, which ultimately allows the three-dimensional coordinates of the reflective mark points to be translated from the tracking ball cage coordinate system to the multi-line laser coordinate system. To facilitate this, a set of tracking ball hardware is integrated into the hardware of the multi-line laser system, as shown in Fig. 6. This tracking ball hardware consists primarily of several polyhedra, each face carrying circular marking points made of a special reflective material. Using a multi-view photogrammetry method, the three-dimensional coordinates of each circular marking point can be determined relative to the cage in which the tracking ball is located, and these coordinates are then expressed in the coordinate system of the cage. It is essential that the joint calibration of the tracking ball cage and the multi-line laser system is performed with an already-calibrated binocular tracking system. Both systems must be positioned in the same field of view to allow simultaneous observation of the cross pole, as shown in Fig. 6. The established coordinate systems include the following:
Ot is the binocular tracking coordinate system,
Om is the multi-line laser coordinate system,
Ob is the tracking ball cage coordinate system, and
Oc is the cross pole coordinate system.

The conversion relationship between the multi-line laser system and the tracking ball cage.
The joint calibration process requires calculating the transformation between several coordinate systems, expressed as P = R · Q + T. Essentially, this involves determining the rotation matrix R and the translation vector T between two distinct coordinate systems. Let P′ = {p1, p2, ..., pn} and Q′ = {q1, q2, ..., qn} denote the sets of corresponding points in the two coordinate systems. By formulating a least-squares objective function, [R, T] can be obtained so that a three-dimensional point set can be mapped seamlessly from one coordinate system to the other. Specifically, the error equation based on the points in P′ can be expressed as follows:
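This rigid-body least-squares problem has a standard closed-form solution via the SVD; the sketch below is one common way to compute [R, T] and is not necessarily the authors' exact solver.

```python
import numpy as np

def estimate_rigid_transform(Q, P):
    """Least-squares estimate of R, T with P ≈ R @ Q + T for paired 3D point
    sets Q, P of shape (n, 3), using the SVD-based (Kabsch) solution."""
    cq, cp = Q.mean(axis=0), P.mean(axis=0)          # centroids
    H = (Q - cq).T @ (P - cp)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) in the recovered rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = cp - R @ cq
    return R, T
```

Centering on the centroids decouples the rotation from the translation, which is why T falls out in closed form once R is known.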
The aim of the joint calibration is to establish the transformation relation [Rmb, Tmb] between the multi-line laser coordinate system Om and the tracking ball cage coordinate system Ob.

Joint calibration schematic diagram.

Joint calibration of physical structure drawing.
The cross pole was positioned in the calibration field of view, and the binocular tracking system was used to capture an image of the cross pole and reconstruct all coding marks. The 3D points Pt representing the encoded points in the binocular tracking coordinate system Ot were then acquired. The transformation relationship [Rct, Tct] from the cross pole coordinate system Oc to the binocular tracking coordinate system Ot can then be calculated.
Next, the cross pole is positioned within the field of view of the multi-line laser system, which captures an image of it and reconstructs the encoded marker points to obtain the 3D data Pm of the cross pole in the laser coordinate system Om. The transformation relationship [Rcm, Tcm] from Oc to Om is then calculated.
After completing the second step, the multi-line laser system is kept stationary while the binocular tracking system captures an image of the tracking ball cage. The 3D coordinates Qt(xi, yi, zi), i = 0...n, of the ball cage in the binocular tracking coordinate system Ot are then computed using the calibrated parameters of the binocular tracking system. Finally, the transformation relationship [Rbt, Tbt] from the ball cage coordinate system Ob to Ot is determined.
The 3D points located on the cross pole can thus be converted into the binocular tracking coordinate system along two routes: directly via [Rct, Tct], or indirectly through the chain Oc → Om → Ob → Ot.
The multi-line laser system is then repositioned, and the above calibration steps are repeated for each placement. The transformation relation [Rmb, Tmb] between the laser system and the ball cage remains the same at every position.
Using (11), the 3D coordinates Pc on the cross pole can be transformed into Pt by the binocular tracking system or into Pt′ by the multi-line laser system. Since Pt and Pt′ both represent 3D points on the cross pole that have been converted to a unified coordinate system, it follows that Pt = Pt′. Substituting (11) yields the following equation:
The only unknown in this equation is the laser-to-cage transformation [Rmb, Tmb]. The corresponding conversion matrices obtained at each position are stacked into an over-determined system of the form AXB = C, which is solved iteratively.
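The chained conversion Oc → Om → Ob → Ot used in these steps can be sketched as follows; the symbol names follow the coordinate-system notation above, and the composition must reproduce the direct cross pole-to-tracking result, which is the constraint used to solve for the unknown laser-to-cage transform.

```python
import numpy as np

def cross_pole_to_tracking_via_laser(Pc, Rcm, Tcm, Rmb, Tmb, Rbt, Tbt):
    """Map cross pole points Pc (shape (n, 3)) into the binocular tracking
    frame through the chain Oc -> Om -> Ob -> Ot. Illustrative sketch using
    the [R, T] pairs named in the text."""
    Pm = Pc @ Rcm.T + Tcm        # cross pole -> multi-line laser system
    Pb = Pm @ Rmb.T + Tmb        # laser system -> tracking ball cage
    return Pb @ Rbt.T + Tbt      # ball cage -> binocular tracking system
```

Equating this indirect result with the direct conversion Pt = Rct · Pc + Tct at every position yields the equations for [Rmb, Tmb].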
Let ℜ ⊆ RN×M be a linear subspace with unique orthogonal complement ℜ⊥ such that RN×M = ℜ ⊕ ℜ⊥. This defines a linear projection operator Fℜ: RN×M → ℜ such that, for any S ∈ RN×M:
The function f(X) = ‖AXB − C‖²/2, defined on this linear subspace, is continuously differentiable. The following iterative procedure is formulated using the conjugate gradient method. The flowchart of the conjugate gradient iteration method is shown in Fig. 9.
Input matrices A ∈ RN1×M1, B ∈ RN2×M2, C ∈ RN3×M3; select an initial value X0 ∈ ℜ;
Calculated by the initial value X0:
i = 0
M0 = C − AX0B
N0 = Fℜ[ATM0BT]
K0 = N0
The operation is terminated if Mi = 0 or the projected gradient Ni = 0; otherwise proceed to the next step.
The following values are to be calculated:
ai = ‖Ni‖2/‖AKiB‖2
Xi+1 = Xi + aiKi
Mi+1 = C − AXi+1B = Mi − aiAKiB
Ni+1 = Fℜ(ATMi+1BT) = Ni − aiFℜ(ATAKiBBT)
βi = ‖Ni+1‖2/‖Ni‖2
Ki+1 = Ni+1 + βiKi
i = i + 1
Proceed to step 3 for evaluation in the next step.
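The iteration steps above can be sketched as follows; the identity projection (the full matrix space) is used as a stand-in for Fℜ, and the code is an illustrative implementation under that assumption, not the authors' own.

```python
import numpy as np

def cg_solve_axb(A, B, C, X0, project=lambda S: S, tol=1e-10, max_iter=100):
    """Conjugate gradient iteration for min ||A X B - C||^2 / 2, following the
    listed steps. `project` stands in for the linear projection operator F_R
    onto the chosen subspace (identity = full space by default)."""
    X = X0.copy()
    M = C - A @ X @ B                       # residual M_i
    N = project(A.T @ M @ B.T)              # projected gradient N_i
    K = N.copy()                            # search direction K_i
    for _ in range(max_iter):
        if np.linalg.norm(N) < tol:         # projected gradient vanished
            break
        AKB = A @ K @ B
        a = np.sum(N * N) / np.sum(AKB * AKB)        # step length a_i
        X = X + a * K
        M = M - a * AKB                     # M_{i+1} = M_i - a_i A K_i B
        N_next = N - a * project(A.T @ AKB @ B.T)    # N_{i+1}
        beta = np.sum(N_next * N_next) / np.sum(N * N)
        K = N_next + beta * K               # K_{i+1} = N_{i+1} + b_i K_i
        N = N_next
    return X
```

For full-rank A and B the associated operator is positive definite, so the iterates converge to the unique least-squares solution.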

Joint calibration flow chart.
The conjugate gradient method is an iterative optimization algorithm primarily used to solve linear systems; for AXB = C the associated operator is positive definite, so the method progressively approximates the solution through iterations, generating at each step a conjugate direction along which the objective function is minimized. In each iteration, the negative gradient at the current iterate is computed and then projected onto the solution subspace using the linear projection operator; the resulting projection matrix serves as the update direction for the subsequent iteration. This approach makes the iterative solution process effective and yields a least-squares approximation of the equations in any chosen linear subspace while minimizing the sum of squared residuals.
After the joint calibration of the multi-line laser system and the tracking ball cage, the 3D points Pmi reconstructed by the multi-line laser system can be transformed into the binocular tracking coordinate system. The binocular tracking system, in conjunction with the multi-line laser system, then serves as a comprehensive tool for object reconstruction by using the transformation matrix [Rmb, Tmb] together with the tracked pose of the ball cage.
The 3D data reconstructed by the multi-line laser system can be transformed into the binocular tracking coordinate system based on the joint calibration result [Rmb, Tmb] and the tracked pose of the ball cage.
In 3D reconstruction with multi-line structured light, the object to be measured is illuminated by a criss-cross laser pattern formed by two seven-line lasers. This method applies the principles of multi-line laser reconstruction and enables the simultaneous acquisition of 3D data for all fourteen laser lines. For larger objects, as shown in Fig. 10, it is essential to merge the 3D data reconstructed from different positions in order to obtain a complete model of the entire object. For this purpose, a binocular tracking system placed in front of the object determines the position and orientation of the multi-line laser system at each location within its field of view.

Schematic diagram of stitching the tracking system.
In order to seamlessly stitch together the final multi-line laser reconstruction data, the transformation relationships between the multiple positions of the tracking ball cage within the binocular tracking system must be calculated. Starting from an initial reference position of the tracking ball cage, the reconstructed data of the subsequent positions are transformed and aligned to the coordinate system of the initial position. Specifically, the 3D data of the tracking ball cage at the first position (k = 0) in the binocular tracking coordinate system serves as the reference frame.
The pose [Rk, Tk] of the ball cage at each subsequent position k in the binocular tracking coordinate system is calculated in the same way. The conversion relationship from position k to the initial position is then obtained by combining [Rk, Tk] with the reference pose [R0, T0].
The three-dimensional data acquired from different locations are consistently transformed into the coordinate system of the initial position of the ball cage using the aforementioned method, the joint calibration results, and the conversion parameters of the ball cage within the binocular tracking system. This facilitates the merging of three-dimensional multi-line laser data from multiple positions.
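A sketch of mapping data from position k back into the initial cage frame; [R0, T0] and [Rk, Tk] denote the cage poses in the binocular tracking frame at positions 0 and k, and the names are illustrative.

```python
import numpy as np

def to_initial_frame(points_k, R0, T0, Rk, Tk):
    """Map 3D data (shape (n, 3)) reconstructed at position k into the ball
    cage's initial (k = 0) coordinate frame by going through the binocular
    tracking frame. A minimal sketch of the stitching step."""
    pts_track = points_k @ Rk.T + Tk     # cage_k frame -> tracking frame
    return (pts_track - T0) @ R0         # tracking frame -> cage_0 frame
```

The second line applies the inverse of [R0, T0], using the fact that a rotation matrix's inverse is its transpose.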
After performing a joint calibration, the data from the 3D reconstruction of the multi-line laser system can be translated into the coordinate system of the tracking ball. Finally, the 3D reconstruction data of the multi-line laser can be aligned with the binocular tracking system, provided that the binocular tracking system recognizes and localizes the tracking ball. To determine the accuracy of the joint calibration, a standard object was captured with the multi-line laser system, and the resulting accuracy was evaluated, as shown in Fig. 11.

Accuracy verification of joint calibration.
This is possible because the 3D data Qj(x, y, z), j = 0...n, of each coding point on the cross pole were determined in advance using a coordinate measuring machine. The 3D data reconstructed after the joint calibration can therefore be compared directly with these reference coordinates.
Finally, the distance difference between the 3D points is calculated in accordance with (24) to serve as an accuracy metric for evaluating the joint calibration.
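A sketch of such a distance-difference accuracy metric; comparing all point pairs, as done here, is an assumption about how the distances are paired.

```python
import numpy as np

def pairwise_distance_errors(P_ref, P_rec):
    """Compare inter-point distances of reference coordinates (e.g. measured
    by CMM) with the reconstructed ones, pair by pair; returns the absolute
    distance differences as an accuracy metric."""
    n = len(P_ref)
    errs = []
    for i in range(n):
        for j in range(i + 1, n):
            d_ref = np.linalg.norm(P_ref[i] - P_ref[j])   # reference distance
            d_rec = np.linalg.norm(P_rec[i] - P_rec[j])   # reconstructed distance
            errs.append(abs(d_rec - d_ref))
    return np.array(errs)
```

Summary statistics (mean, standard deviation) of this error vector give the figures reported in the experiments.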
The experimental phase of the tracking system comprises several key procedures: the calibration of the binocular tracking system, the joint calibration of the multi-line laser system and the tracking ball cage, the three-dimensional reconstruction by the multi-line laser, and the integration of the three-dimensional data by the binocular tracking system. Calibration of the binocular tracking system primarily involves determining the internal and external camera parameters. This involves positioning a cross pole at various points in the cameras' field of view, capturing synchronized images of the pole, and then using specialized image processing algorithms to precisely determine the center coordinates and coding values of the multiple markers on the pole. An optimization method based on multi-position adjustment is then applied to the collected data to complete the calibration of the binocular tracking system.
The goal of the joint calibration of the multi-line laser system and the tracking ball cage is to obtain a transformation matrix that aligns the laser system's coordinates with the coordinate system defined by the tracking ball cage. The final calibration precision is evaluated by using the cross pole as a verification object: by reconstructing the cross pole with the results of all previous calibrations, its dimensions can be compared with reference measurements. Ultimately, the goal is to integrate the 3D data reconstructed by the multi-line laser system into a single coordinate system by combining the 3D data stitching technique with the binocular tracking system. This integration enables a holistic 3D reconstruction of the entire object.
The hardware components of the binocular tracking system essentially consist of two industrial cameras, two industrial lenses, adjustable LED lights for brightness control, and a cross pole. The most important specifications are listed in Table 1. In terms of image data capture, the controller shown in Fig. 3 regulates the LED light emission and synchronizes the camera captures. By using reflective coding points, the synchronized control of LED light projection onto these points during image capture ensures the precise retrieval of coding point data.
Binocular hardware parameters.
| Camera | | Cross pole | |
|---|---|---|---|
| Resolution [pixel²] | 2248×2048 | Size [mm²] | 1200×1200 |
| Focal length [mm] | 8 | Coded point number | 16 |
| Frame rate [s⁻¹] | 90 | Coded point size [mm²] | 80×80 |
| Field range [mm] | 3000~4000 | | |
| Distance [mm] | 2000~3000 | | |
During the calibration process of a binocular tracking system, it is important to accurately obtain the 3D coordinates and corresponding encoded values of each point on the cross pole. The 3D coordinates of each center point on the cross pole target are pre-measured with high precision using a coordinate measuring machine (CMM). The 3D coordinates and encoded values of all encoded points on the cross pole are shown in Fig. 12(b). During the calibration of the binocular tracking system, the cross pole is positioned in the cameras' field of view, and the left and right cameras simultaneously capture images of the cross pole at different positions. Using an encoded point detection algorithm, the two-dimensional coordinates of each encoded point's center and the corresponding encoded values are identified; the detection results on the cross pole are shown in Fig. 13. After obtaining images of the cross pole at all positions, the internal and external parameters of the binocular camera, the distortion parameters, and the transformation matrix [R, T] from the right camera to the left camera are calculated using the binocular calibration algorithm described in Chapter 2. The calibration results of the binocular tracking system are shown in Table 2. When the binocular tracking system calibration is completed, it is essential to perform accuracy tests to assess its quality. As shown in Fig. 12(a), the exact distances L1 and L2 on the cross pole target were pre-measured prior to calibration. The cross pole was then positioned at different locations within the test field of view, and images of the target were captured accordingly. Using the results of the binocular camera calibration and a binocular stereo vision reconstruction algorithm, the 3D coordinates of all encoded points were reconstructed.
The reconstructed distance values for these points were then compared to their actual counterparts, revealing the error metrics associated with these distance reconstructions. For verification, 16 sets of images were acquired. The resulting error graph is shown in Fig. 14, indicating an average error of 0.0214 mm for the L1 distance and an average error of 0.0233 mm for the L2 distance.

(a) Cross pole; (b) Three-dimensional coordinates and encoded values of points on the cross pole.

The results of left and right image coding point detection.

Calibration error.
Calibration parameters results.
| Intrinsic and distortion parameter | External parameter |
|---|---|
The calibration procedure for the multi-line laser system and the tracking ball cage requires establishing a transformation relationship between their coordinate systems. Before starting the joint calibration, it is essential to determine the three-dimensional coordinates of all reflective markers on the surface of the tracking ball cage. The tracking ball cage is placed on a planar surface encircled by coded marking points, as shown in Fig. 15. A single-lens reflex camera is used to capture images from different angles around the tracking ball cage, and photogrammetry algorithms are used to derive three-dimensional coordinates for each marker point. Fig. 16(b) shows the reconstructed three-dimensional coordinates of all reflective markers on the ball cage.

The images of the ball cage were captured from various perspectives.

(a) Effect of coding point recognition, (b) 3D coordinates of all reflecting points on the ball cage.
After acquiring the 3D coordinates of all reflective points on the tracking ball cage, the coordinate origin is set at one reflective marker point. During the joint calibration of the multi-line laser system and the tracking ball cage, it is of utmost importance that the binocular tracking system, the multi-line laser system, and the cross pole are aligned within the same field of view, as shown in Fig. 17. First, the multi-line laser system captures an image of the cross pole to determine its three-dimensional coordinates, and a transformation matrix from the cross pole coordinate system to the multi-line laser system is calculated. The binocular tracking system then captures an image of the cross pole and calculates the corresponding three-dimensional coordinates in its own coordinate system, establishing a conversion relationship between them. At the same time, the images captured by the binocular tracking system can be used to establish the correlation between the ball cage and the tracking system.

Multi-line laser and tracking ball cage joint calibration.
The 3D points Pc(Xi, Yi, Zi) of the cross pole in its own coordinate system, together with the points Pm(Xi, Yi, Zi) and Pb(Xi, Yi, Zi) reconstructed by the multi-line laser system and the binocular tracking system, are shown in Fig. 18. The conversion matrices between these coordinate systems are then computed from the point correspondences as described above.

The 3D data and the transformation relationship of the cross pole within the multi-line laser system and the binocular tracking system.

The conversion relationship between the tracking ball cage and the binocular tracking system.
The intermediate and final results of joint calibration.
The iterative conjugate matrix optimization method proposed in this paper is compared with the traditional planar target calibration method in terms of calibration accuracy and 3D reconstruction accuracy after joint calibration. Calibration accuracy is compared by reconstructing the same reference target and evaluating the errors; 3D reconstruction accuracy is compared by reconstructing standard balls and analyzing the errors in ball center distance and diameter.
As shown in Fig. 20, two different joint calibration methods were used to calibrate the multi-line laser system to the tracking ball cage. Based on the calibration results, a binocular stereo reconstruction method was then used to reconstruct the 3D coordinates of the circle centers on the cross pole. The hardware and environmental parameters for calibration and reconstruction are listed in Table 4. The cross pole was placed at different positions within the field of view, and the encoded circle centers were extracted to reconstruct their 3D coordinates. Finally, the coordinates were transformed into the binocular tracking coordinate system using the joint calibration results according to (33), and the post-transformation errors were calculated using (35). A total of 16 datasets were collected, as shown in Fig. 21. The average error of the proposed method over these 16 datasets was 0.0214 mm with a standard deviation of 0.0064 mm, while the traditional plate-based calibration had an average error of 0.2324 mm with a standard deviation of 0.1104 mm. This shows that the proposed algorithm significantly improves calibration accuracy.
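For each dataset, the post-transformation error can be taken as the mean distance between the transformed circle centers and their reference coordinates. A minimal sketch with hypothetical data (the paper's actual error definitions are (33) and (35)):

```python
import numpy as np

def transform_points(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return (h @ T.T)[:, :3]

def registration_error(T, pts_local, pts_ref):
    """Mean Euclidean distance between transformed points and references."""
    diff = transform_points(T, pts_local) - pts_ref
    return np.linalg.norm(diff, axis=1).mean()

# Hypothetical example: a calibration with a small residual translation bias
pts_ref = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
T = np.eye(4)
T[:3, 3] = [0.02, 0.0, 0.0]                      # 0.02 mm residual offset
errors = [registration_error(T, pts_ref, pts_ref)]  # one "dataset"
print(np.mean(errors), np.std(errors))
```

Repeating this over all 16 cross-pole placements and averaging yields the summary statistics reported above.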

Schematic diagram of the joint calibration comparison, (a) Joint calibration based on a flat plate; (b) Joint calibration based on a cross pole.

Comparison of joint calibration accuracy.
Comparison test parameters for joint calibration.

| | The method in this paper | Traditional method |
|---|---|---|
| Calibrated object | Cross pole | Planar calibration board |
| Field of view | 3000 mm × 4000 mm | 3000 mm × 4000 mm |
| Depth of field range | 800 mm × 1000 mm | 800 mm × 1000 mm |
| Test object | Cross pole | Cross pole |
| Camera resolution | 2448 × 2048 | 2448 × 2048 |
| Camera focal length | 8 mm | 8 mm |
| Calibration results | | |
Accurately converting the 3D data generated by the multi-line laser system into the binocular tracking coordinate system is a crucial step in stitching multi-view 3D data from the laser system. To verify the precision of the overall system, a reconstructed standard ball is used as a verification tool, and the multi-position joint optimization method based on the cross pole is compared with the traditional planar target method. As shown in Fig. 22, the exact distance between the two ball centers is Di = 300.0015 mm, and the diameters of balls A and B are DA = 60.0024 mm and DB = 60.0031 mm, respectively. The standard ball is placed within the test field of view and reconstructed with the multi-line laser tracking system, as shown in Fig. 23. The reconstruction results obtained with the calibration method presented in this article are compared with those obtained using the conventional planar target calibration. After multiple scans, the point cloud data of the standard balls was obtained; a sphere is fitted to the 3D coordinates to locate each ball center, and the distance between the two centers and the ball diameters are then calculated and compared with the reference values to determine the final accuracy. Table 5 and Fig. 24 provide a detailed breakdown of the final accuracy statistics.
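The sphere fitting step can be done with a linear least-squares formulation of the sphere equation. A self-contained sketch (function name and sampled data are hypothetical, not the paper's implementation):

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit. Rewrites x^2 + y^2 + z^2 =
    2ax + 2by + 2cz + d and solves for (a, b, c, d); the centre is
    (a, b, c) and the radius is sqrt(d + a^2 + b^2 + c^2)."""
    pts = np.asarray(pts, float)
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Hypothetical check: points sampled on a 30 mm-radius sphere at (10, 20, 30)
rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, 20.0, 30.0]) + 30.0 * dirs
c, r = fit_sphere(pts)
print(np.round(c, 4), round(2 * r, 4))           # fitted centre and diameter
```

With two fitted spheres, the center distance is simply `np.linalg.norm(c_A - c_B)`, and the diameter errors are the differences `2*r - D` against the reference values.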

Standard ball.

Standard ball reconstruction process based on the tracking system.

Error plots for both methods.
Comparison of precision between the calibration algorithm proposed in this article and the traditional flat-plate calibration algorithm.
| Number | Di (Method 1: this paper) | DA (Method 1) | DB (Method 1) | Di (Method 2: traditional plate) | DA (Method 2) | DB (Method 2) |
|---|---|---|---|---|---|---|
| 1 | 300.0456 | 60.0247 | 60.0182 | 300.1547 | 60.0628 | 60.0978 |
| 2 | 300.0578 | 60.0341 | 60.0138 | 300.1236 | 60.1174 | 60.1147 |
| 3 | 300.0372 | 60.0297 | 60.0142 | 300.0412 | 60.1524 | 60.1340 |
| 4 | 300.0424 | 60.0225 | 60.0150 | 300.0824 | 60.0925 | 60.0941 |
| 5 | 300.0447 | 60.0279 | 60.0118 | 300.1471 | 60.0817 | 60.1119 |
| 6 | 300.0481 | 60.0248 | 60.0171 | 300.0758 | 60.1247 | 60.1477 |
| 7 | 300.0525 | 60.0121 | 60.0121 | 300.110 | 60.0852 | 60.0701 |
| 8 | 300.0473 | 60.0195 | 60.0177 | 300.0914 | 60.1264 | 60.1517 |
| Mean error | 0.0454 | 0.0220 | 0.0306 | 0.1593 | 0.1030 | 0.1121 |
Note: reference values: ball center distance Di = 300.0015 mm, diameter DA = 60.0024 mm, diameter DB = 60.0031 mm; uncertainty ±0.0002 mm. All values in mm.
A detailed examination of the test data in Table 5, which lists the center distance of the standard ball and the diameters of the two balls, shows that the calibration method described in this article achieves considerable precision. The mean errors of the center distance and the two diameters obtained with this method are 0.0454 mm, 0.0220 mm, and 0.0306 mm, with standard deviations of 0.0062 mm, 0.0067 mm, and 0.0074 mm, respectively. The traditional plate calibration method yields mean errors of 0.1593 mm, 0.1030 mm, and 0.1121 mm, with standard deviations of 0.0471 mm, 0.0296 mm, and 0.0281 mm. The proposed calibration method therefore significantly improves the precision of the 3D reconstruction. In addition, the cross pole allows calibration images to be captured efficiently at different positions, which streamlines the calibration process while increasing its precision. Using the accurate calibration results, the binocular tracking system performs 3D reconstruction of multiple objects, capturing fine surface details and producing high-precision reconstructions.
Based on the joint calibration results, the tracking system was used to perform 3D reconstruction of objects in a test field of 1.5 m × 2 m. Two automotive components and a high-voltage cable were each reconstructed, yielding the corresponding point cloud data. A segment of the reconstruction results is shown in Fig. 25. The reconstructed objects have realistic shapes and a high level of detail, which underlines the effectiveness of the proposed calibration method and 3D reconstruction technique.

Other objects are reconstructed based on a multi-line laser tracking system.
In this paper, a 3D reconstruction method based on binocular tracking is presented to enable marker-free stitching in multi-line laser measurement. The proposed methodology integrates the binocular tracking system with the multi-line laser system to achieve three-dimensional reconstruction. To ensure accurate and efficient joint calibration of the multi-line laser system and the tracking ball cage, a calibration methodology based on iterative conjugate gradient optimization is presented. First, the internal and external parameters of the binocular tracking system are precisely calibrated, followed by the calculation of the pose conversion relationships between the binocular tracking system and the cross pole and between the multi-line laser system and the cross pole. The transformation matrix between the multi-line laser system and the tracking ball cage is then determined from pose data acquired at different positions, and high-precision joint calibration is achieved through iterative optimization based on the conjugate gradient method. Compared with traditional planar calibration plate technology, the proposed method significantly improves calibration accuracy while effectively optimizing the overall reconstruction accuracy. Experimental results indicate that the reconstruction accuracy of conventional joint calibration methods is about 0.1 mm, while the proposed method improves it to about 0.02 mm. The approach provides robust technical support for marker-free stitching while accurately reconstructing objects in three dimensions. Moreover, the proposed joint calibration technique not only improves calibration accuracy but also streamlines the process, increasing the overall usability of the system.
The aforementioned advantages make the method highly promising for widespread application in the fields of industrial automation and machine vision.