Grains such as rice, wheat, and corn, along with other strategic commodities, are stored in large warehouses by national agencies, which is an important means of ensuring resilience to disasters and social stability. Quality control before grain storage includes setting up sampling points on trucks, using sampling rods to extract grain for on-site inspection, and finally deciding whether to store the grain based on the inspection results. The way in which the sampling points are set up is an important link in grain quality control. Traditional manual sampling is inefficient, labor-intensive, and difficult to monitor. Automated sampling point generation offers high efficiency and unmanned operation [1], but due to the diversity of truck types and the variability of parking positions, it is not easy to determine the boundaries of the compartments [2], which makes the generation of sampling points very difficult [3], [4]. The problems of automatic compartment localization and calculation of compartment dimensions must therefore be solved before the sampling points can be determined.
Unlike the usual single-point localization by SLAM [6], [7] and GPS [8], [9], compartment localization requires a precise measurement of the compartment's orientation and dimensions. Common localization methods include 2D imaging [11], [12], [13], 3D laser scanning [14], [15], [16], [17], and rotating 3D laser scanning [18], [19]. Monocular visual localization requires pre-calibration of camera and robotic arm poses, followed by feature extraction and localization. Due to the scale ambiguity of monocular vision, precise localization and measurement are challenging. Ai et al. [10] achieved vehicle segmentation by extracting vehicles from 2D images and establishing a transformation relationship between the 2D images and 3D LIDAR point clouds. Jurado-Rodríguez et al. [11] used multi-angle 2D images captured by drones to reconstruct 3D models and applied convolutional neural networks for vehicle part segmentation; the correspondence between images and 3D models enabled compartment recognition. Compared to monocular measurement, stereo vision offers higher accuracy because the scale factor is known. Mrovlje et al. [12] used white PVC boards as features and implemented a stereo measurement method to localize trucks in ports for automated cargo loading and unloading. Li et al. [13] used drones equipped with stereo imaging systems for truck localization and measurement.
Stereo or multi-camera systems require overlapping fields of view for 3D reconstruction, whereas laser scanning removes the need for overlapping views and thus complements such systems. Point cloud data acquisition is achieved using principles such as dTOF and iTOF [20]. Ren et al. [5] used single-line laser scanning for truck localization and measurement, enabling automated cement loading. Similarly, Yu et al. [4] used single-line laser scanning for automated truck detection and palletizing planning, projecting point clouds in multiple directions and fitting lines to achieve compartment localization and segmentation. Projection methods use dimensionality reduction to project truck point clouds onto a plane, enabling more robust processing. However, the accuracy of line fitting varies greatly with noisy point clouds in projection-based methods, making them unsuitable for compartment localization and segmentation in open environments. Fan et al. [14] used 16-line laser scanning for vehicle scanning and density clustering with the DBSCAN algorithm for vehicle segmentation. This method adapts to varying densities in 3D laser point clouds, but struggles with the random density variations caused by material and viewing-angle effects in truck scanning point clouds. While 3D laser scanners are cost-effective and offer high scanning accuracy, conventional mechanical scanners are limited by their field of view, resulting in sparse point cloud data. To extend the field of view, a motor was added to drive the rotation of the scanner [18], [19]. A vertical field of view of 360° was thus achieved, which significantly expands the scanning range and avoids sparse point clouds and visual blind spots.
To identify the boundaries of train compartments, 2D images are created by point cloud projection [2], where the boundary is extracted by the Canny operator; this leads to inaccurate edge extraction due to the influence of the point cloud thickness. A method for ship structure modeling based on point cloud projection is proposed in [21], in which the edges are extracted by line fitting with higher accuracy.
Based on the above analysis, laser scanning offers advantages over imaging methods in vehicle localization and measurement, such as eliminating the need for auxiliary features like white panels or for the overlapping views required in stereo vision. Active laser emission is also better suited for outdoor vehicle localization and measurement. Therefore, a localization and measurement method based on laser scanning is developed in this article. To extend the scanning range, we use a rotating laser scanner for compartment scanning so that a single device can cover the compartment in both width and height.
Common methods for generating sampling points include manual selection and automatic selection [5]. This paper focuses on the automatic calculation of sampling points, which requires solving the problems of compartment localization and measurement. Compartment localization refers to determining the positions of the rear and side panel corners of the compartment and generating grain sampling points based on these corner positions. First, a rotating LIDAR detects the grain truck and collects environmental point clouds, and non-vehicle points are removed using a pre-defined region of interest (ROI) clipping frame. Then a point cloud projection method is proposed: based on the ground calibration results, the point cloud is rotated so that the ground is parallel to the XOY plane of the LIDAR coordinate system, and following the dimensionality reduction idea, the points are projected onto the XOY plane to obtain a two-dimensional point cloud. Next, the rear and side panels of the compartment are segmented and a region growing method for corner point generation is proposed. Based on the corner points, the compartment position is determined and the sampling boundaries are calculated. Finally, a certain number of sampling points are randomly generated within the sampling boundaries.
The innovations of this paper are as follows:
- A method for automatically generating grain truck sampling points is proposed. After collecting large-scale point cloud data with a rotating laser radar, compartment corner points are located to construct sampling boundaries. Within these boundaries, a certain number of sampling points are randomly generated by a sampling machine. This method has the advantages of high efficiency and unmanned operation and fulfills the requirements for unmanned grain testing.
- A projection-based method for grain truck point cloud processing is proposed. This method reduces three-dimensional point clouds to two-dimensional point clouds within the XOY plane, which greatly reduces the computational complexity and avoids the instability problems that may arise from directly fitting planes in three-dimensional space.
- A corner point extraction method for compartments based on region growing is proposed. First, the side and rear panels of the compartment are segmented, and the line equation of the projected point cloud of the compartment panels is fitted using a global least squares method. Two-dimensional grids are divided along the line and each grid is expanded based on the density variation rate of the point clouds. When the density variation rate exceeds a set value, the expansion is terminated and the current position is identified as a compartment corner point. This method partially solves the problem of positioning failure due to density variations in laser radar scanning point clouds.
Our automatic sampling system for grain trucks is shown in Fig. 1. The point cloud captured by the rotating laser scanner is shown in Fig. 2. The points are projected onto the ground plane, lines are fitted to the side and rear point clouds, corner B is extracted as the intersection of the two lines, and the other two corners, A and C, are extracted based on the density variation rate along the lines.

Automatic sampling system.

Principle of measurement.
The static modeling equipment is based on a rotating laser scanner (see Fig. 3), which is mainly composed of a multi-line laser scanner, a motor (including an encoder), an embedded computer, a voltage stabilization module, and other components. The static modeling equipment is securely installed on the cloud rail pillar at the sampling site. By controlling the continuous rotation of the motor, the equipment can capture the entire point cloud of the trucks in the sampling area.

Rotating laser scanner.
The sampling system uses a single radar for grain truck localization and measurement. To obtain a wide field of view, the rotating laser scanner is mounted on the pillar of the sampling equipment, see Fig. 4. Due to single-sided scanning, the opposite side of the grain truck cannot be scanned due to occlusions. However, given the symmetrical shape of the grain truck, scanning one side is sufficient for localizing and measuring the length and width. We have set the motor speed to 1.5 rpm, which means that the motor needs 40 s for one full rotation. Since the radar scans symmetrically, half a rotation of the motor takes 20 s to scan the entire scene. However, to capture denser point clouds, we still scan for 40 s. Apart from the occluded area, our rotating radar scan covers the entire spherical region centered on the installation position, so that any grain truck within the specified area can be scanned.

Installation diagram of the rotating laser scanner.
LIDAR can scan scenes ranging from a few dozen to hundreds of meters, capturing point clouds containing trucks, roads, buildings, vegetation, and various machines. To simplify point cloud segmentation, a rectangular parking area is delineated under the sampling robot for truck positioning. Before vehicle localization, four corner points are manually selected in the scanned point cloud. These four points serve as the corners of the rectangle used for segmenting the truck point cloud. The trucks are segmented by polygonal clipping of the point cloud.
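The polygonal clipping step can be sketched as a ray-casting point-in-polygon test over the XY coordinates. This is a minimal sketch, assuming NumPy arrays; the function name and array shapes are illustrative, not from the paper:

```python
import numpy as np

def clip_to_roi(points, polygon):
    """Keep only points whose XY coordinates fall inside a polygon.

    points  : (N, 3) array of scanned XYZ points
    polygon : (M, 2) array of manually selected corner XY coordinates
    Returns the clipped (K, 3) point array.
    """
    x, y = points[:, 0], points[:, 1]
    px, py = polygon[:, 0], polygon[:, 1]
    inside = np.zeros(len(points), dtype=bool)
    j = len(polygon) - 1
    for i in range(len(polygon)):
        # Ray casting: toggle for each polygon edge crossed by a
        # horizontal ray cast from the query point towards -X
        crosses = ((py[i] > y) != (py[j] > y)) & (
            x < (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i]
        )
        inside ^= crosses
        j = i
    return points[inside]
```

Any point outside the manually selected rectangle (buildings, vegetation, other machines) is dropped before further processing.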
To mitigate the influence of ground points and scatter points between vehicles and the ground on corner point localization, a ground height (Z-axis) offsetting method is used in this work to remove ground points and retain only the point cloud data above.
Due to the non-parallelism between the ground and the XOY plane of the rotating LIDAR, we must first rectify the horizontal plane of the truck to facilitate the subsequent point cloud projection and robust extraction of the corner points of the truck compartment. This rectification process is completed before positioning. To obtain the transformation relationship more robustly, we manually segment the ground point cloud and perform Gaussian filtering to obtain smooth point cloud data. Then, we calculate the centroid (x̅, y̅, z̅) of the plane point cloud and compute the covariance matrix M3D based on the centroid, as shown in (1) and (2).
Perform the singular value decomposition (SVD) of the covariance matrix M3D to obtain eigenvalues and eigenvectors. The eigenvector corresponding to the smallest eigenvalue is set as the normal vector n0(nx,ny,nz) of the ground. The normal vector should be normalized to unit length.
Our goal is to align the scanned ground point cloud parallel to the XOY plane of the LIDAR coordinate system and calculate the cross product ν of the normal vector and the unit vector n1 (0,0,1) in the positive direction of the Z-axis, as shown in (3).
Using the Rodrigues' rotation formula, we can derive the rotation matrix R, which corresponds to the cross product ν. Then we rotate the original point cloud to obtain the corrected point cloud. Finally, we project all corrected point clouds along the vertical direction onto the XOY plane, thus reducing the 3D point cloud to a 2D point cloud.
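The rectification pipeline above (centroid, covariance, SVD normal, cross product, Rodrigues rotation, vertical projection) can be sketched as follows. This is an illustrative NumPy implementation; the function name and the normal-orientation check are our own additions:

```python
import numpy as np

def rectify_to_ground(points, ground_points):
    """Rotate the scan so the ground becomes parallel to the XOY plane,
    then drop Z to obtain the 2D projection."""
    # Centroid and covariance of the manually segmented ground patch
    centroid = ground_points.mean(axis=0)
    centered = ground_points - centroid
    cov = centered.T @ centered / len(ground_points)
    # Eigenvector of the smallest eigenvalue is the ground normal n0
    _, _, vt = np.linalg.svd(cov)
    n0 = vt[-1] / np.linalg.norm(vt[-1])
    if n0[2] < 0:                         # orient the normal towards +Z
        n0 = -n0
    n1 = np.array([0.0, 0.0, 1.0])
    v = np.cross(n0, n1)                  # rotation axis scaled by sin(theta)
    s, c = np.linalg.norm(v), float(np.dot(n0, n1))
    if s < 1e-12:                         # ground already horizontal
        R = np.eye(3)
    else:
        k = v / s                         # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2
        R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    rotated = points @ R.T
    return rotated[:, :2]                 # drop Z: projection onto XOY
```

After this step, distances measured in the 2D projection match true in-plane distances on the (now horizontal) truck, which is what makes the subsequent line fitting meaningful.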
To fit the point clouds of the rear and side panels of the carriage into straight lines, we extract the scanned boundary point cloud of the carriage. Since the clipped point cloud usually contains no outliers, we determine the orientation of the LIDAR relative to the parking area using {sid, bot}, where 'sid' indicates whether the grain truck was scanned from the left or right side, and 'bot' indicates whether the rear panel of the carriage is at the farthest or closest end in the Y-axis direction. Based on {sid, bot}, we construct straight lines l1 and l2 that are parallel to the X-axis and the Y-axis (see Fig. 5). These lines represent the boundary regions of the carriage, which were calculated according to the nearest point principle.

Scanned truck carriage point cloud.
After obtaining the reference lines, all points within a distance threshold d from the line are extracted as input data for fitting the line.
In this study, the line fitting method based on RANSAC is used. First, a certain number of points are randomly selected from the input data for line fitting. Then, a line is fitted based on the least squares method with the selected random points. In order to mitigate the degeneration problem when fitting lines perpendicular to the X-axis, the centroid (x̅, y̅) of the point cloud is calculated together with the covariance matrix M2D as shown in (4).
The direction vector of the line is calculated as the eigenvector of M2D corresponding to the largest eigenvalue.
Therefore, the equation representing the fitted line can be expressed as shown in (5).
Multiple line fittings are performed based on the RANSAC method. The line with the highest number of inliers is selected as the final fitted line.
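The RANSAC fitting described above can be sketched as follows, using a covariance-based (total least squares) fit so that lines perpendicular to the X-axis do not degenerate. The iteration count, sample size, and inlier threshold are illustrative values, not the paper's settings:

```python
import numpy as np

def fit_line_tls(pts):
    """Total least squares fit of a 2D line: returns (centroid, unit direction).
    Using the covariance eigenvector avoids the degeneration of a
    y = kx + b regression for near-vertical lines."""
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    _, _, vt = np.linalg.svd(centered.T @ centered)
    return centroid, vt[0]   # largest-eigenvalue eigenvector = direction

def ransac_line(pts, n_iter=200, sample=5, thresh=0.05, seed=0):
    """Repeatedly fit lines on random subsets, keep the candidate with
    the most inliers, and refit on those inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        idx = rng.choice(len(pts), size=sample, replace=False)
        p0, d = fit_line_tls(pts[idx])
        rel = pts - p0
        # Perpendicular distance of every point to the candidate line
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = dist < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_line_tls(pts[best])
```

The final refit over the largest inlier set corresponds to selecting the line with the highest number of inliers as the fitted result.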
The first corner point is defined as the intersection of the fitted lines representing the projections of the rear panel and the side panel obtained from the scan data. Its calculation is straightforward: it is the intersection point of the two fitted lines described previously. The second and third corner points are located on the rear panel and the side panel, respectively; the same method, described next, is used to calculate both.
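With the two lines in point-plus-direction form, the first corner point reduces to a 2×2 linear solve; a minimal sketch (function name is ours):

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of two 2D lines given as a point and a direction:
    solves p1 + t*d1 = p2 + u*d2 for (t, u) and returns the crossing point."""
    p1, d1, p2, d2 = (np.asarray(a, dtype=float) for a in (p1, d1, p2, d2))
    A = np.column_stack([d1, -d2])      # 2x2 system in the unknowns (t, u)
    t = np.linalg.solve(A, p2 - p1)[0]
    return p1 + t * d1
```

The solve fails only when the two fitted lines are parallel, which cannot occur for the rear and side panels of a well-formed carriage.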
Since the density of the point cloud decreases with increasing scanning distance, this paper proposes a corner point extraction method based on the density variation rate. To calculate the local density of the point cloud, we divide the space along the fitted line into rectangular grids with a grid size l, which is set to 0.3 m in this paper. The grid division method is shown in Fig. 6. Using the first corner point as a reference, we calculate the dot product of the vector constructed from the centroid of the fitted point cloud and the first corner point, with the direction vector of the fitted line. If the dot product is positive, we divide the grids in the direction of the direction vector, otherwise we divide the grids in the opposite direction of the direction vector.

Grid division.
Given a density variation rate threshold Δρ, we start by counting the number of points N in each grid. Then we calculate the point cloud density ρ using (6). If the density variation between adjacent grids exceeds Δρ, the centroid of the points in that grid is identified as a corner point. Due to the inherent uncertainty in local point cloud density variations, a more robust approach is adopted in this work by considering the current grid density together with the densities of the subsequent m grids: we check whether the density variation rates of all these grids exceed Δρ before declaring a corner point.
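The region growing step can be sketched as follows. This is a simplified illustration: the drop threshold and m are assumed values, and the sketch returns the position on the line at the detected grid boundary rather than the grid centroid used in the paper:

```python
import numpy as np

def grow_to_corner(pts, corner, direction, grid=0.3, drop=0.5, m=3):
    """Walk grids of size `grid` along the fitted line starting at the
    first corner; stop once the density falls by more than `drop`
    (relative to the previous grid) in the current grid AND the next
    m-1 grids, and return that position on the line."""
    t = (pts - corner) @ direction            # 1D coordinate along the line
    t = t[t >= 0.0]
    n_bins = int(np.ceil(t.max() / grid))
    counts = np.array([np.sum((t >= i * grid) & (t < (i + 1) * grid))
                       for i in range(n_bins)])
    density = counts / grid                   # points per metre of line
    for i in range(1, n_bins):
        ref = density[i - 1]
        if ref == 0:
            continue
        window = density[i:i + m]
        # The drop must persist over m grids to reject local noise
        if len(window) == m and np.all((ref - window) / ref > drop):
            return corner + (i * grid) * direction
    return corner + (n_bins * grid) * direction   # no drop found: panel end
```

Requiring the drop to persist over m consecutive grids is what makes the method robust to the random local density fluctuations of rotating LIDAR scans.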
Due to the scanning setup used in this study, only one side of the carriage can be scanned while the other side remains in a blind spot. Therefore, it is challenging to calculate the fourth corner point c4 using the previously described method. This corner point corresponds to the non-localized corner on the side panel within the blind spot. To solve this problem, the fourth corner point c4 is calculated based on the symmetry of the corner points of the carriage, as shown in (7):
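Equation (7) is not reproduced here; under the assumption that the carriage is rectangular, the symmetry relation amounts to a parallelogram completion, which we sketch as:

```python
import numpy as np

def fourth_corner(c1, c2, c3):
    """Occluded corner of an (assumed rectangular) carriage: c1 is the
    shared corner, c2 and c3 its two neighbours, so c4 = c2 + c3 - c1."""
    return (np.asarray(c2, dtype=float) + np.asarray(c3, dtype=float)
            - np.asarray(c1, dtype=float))
```

This is our reading of the paper's symmetry argument; it requires no scan data from the blind-spot side of the truck.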
Starting from the four corner points, the boundary lines of the rectangle are determined. A certain number of sampling points are randomly generated within these boundary lines. To ensure smooth sampling by the sampling device, this paper proposes offsetting the boundary lines inwards by a certain distance. In this paper, the offset is set to 0.2 m to ensure that the sampling machine can reliably extract grains.
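The boundary offset and random generation can be sketched as follows, with the 0.2 m inward offset from the paper and an illustrative corner parameterization (c1 the shared corner, c2 and c3 its two rectangle neighbours):

```python
import numpy as np

def sampling_points(c1, c2, c3, offset=0.2, n=5, seed=0):
    """Draw n random sampling points inside the carriage rectangle
    spanned by c1, c2 and c3, with every boundary moved inwards by
    `offset` metres so the sampler clears the panels."""
    c1, c2, c3 = (np.asarray(a, dtype=float) for a in (c1, c2, c3))
    u, v = c2 - c1, c3 - c1                  # the two rectangle edges
    L, W = np.linalg.norm(u), np.linalg.norm(v)
    u, v = u / L, v / W                      # unit edge directions
    rng = np.random.default_rng(seed)
    a = rng.uniform(offset, L - offset, n)   # positions along one edge
    b = rng.uniform(offset, W - offset, n)   # positions along the other
    return c1 + a[:, None] * u + b[:, None] * v
```

Because the offsets are applied in the rectangle's own edge directions, the method works for a truck parked at any angle within the ROI.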
The application of our measurement system is shown in Fig. 7. To validate our method, we scanned 10 grain trucks and tested the side line fitting error and corner localization deviation to illustrate the effectiveness of our algorithm. In particular, the corner localization deviation is very important to indicate that the measurement system is operating normally.

Application of the measurement system.
To ensure the normal operation of the system, the verification interval is set at six months. Employees regularly check whether the maximum error of corner localization is within 9.8 cm. A diagram of the traceability measurement is shown in Fig. 8.

Traceability measurement.
The scanned point cloud, see Fig. 9 (left), shows a good overall quality of the scene. Since a rectangular parking area has been designated in advance for the grain transport truck, where the truck can park freely, we can extract the rectangular region of interest in advance. To do this, we use a polygonal clipping method to remove point cloud data outside the polygon. The segmented truck point cloud is complete, see Fig. 9 (right), and can be used for positioning and measurement.

Original (left) and segmented (right) truck point cloud.
The original point cloud and the corresponding projected points are shown in Fig. 10. The projected points have prominent linear features representing the side panels and the rear panel. The green lines in the projected points represent the fitted lines. The intersection point of these two fitted lines is defined as the first corner point of the carriage, with the other corner points positioned relative to it. Due to the characteristics of the rotating LIDAR, which scans from top to bottom, the upper edge of the carriage's far side can also be scanned. However, because this edge is narrow, its point cloud becomes sparse with distance. It is therefore more robust to rely on the rear and side panels of the carriage, which are closer to the LIDAR.

Original points of the grain truck (top) and corresponding projected points (bottom).
We tested the error of 20 sets of fitted lines using the Root Mean Square Error (RMSE) as the method for error calculation. The distribution of side line fitting errors is shown in Fig. 11. The average error value for all test sets is 6.7 cm. This indicates a high degree of accuracy in the line fitting process.

Side line fitting error distribution.
The results of corner localization for the grain truck are shown in Fig. 12, where the red points represent the localized corners. It can be seen that the proposed localization method accurately calculates the positions of the corner points.

Corner localization.
An indirect method was used to measure the localization accuracy. Ten trucks were tested (see Table 1). The length and width of the carriage were measured manually using a tape measure as reference dimensions. These measurements were then compared with the dimensions calculated using the localized corner points as measured values.
Dimensions of trucks.
| Truck ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Width [cm] | 254.4 | 257.3 | 255.2 | 257.1 | 255.5 | 255.1 | 258.3 | 212.5 | 213.1 | 213.7 |
| Length [cm] | 1295.8 | 1301.9 | 1297.0 | 1298.6 | 1298.2 | 1295.3 | 1301.5 | 392.6 | 393.0 | 393.3 |
The deviation between the reference dimensions and the measured dimensions was used to evaluate the localization accuracy. As can be seen in Fig. 13, the accuracy of the width measurement is higher than that of the length measurement, as the LIDAR is installed closer to the rear of the carriage. The localization accuracy achieved is 9.8 cm, which is sufficient for the requirements of grain sampling. For applications that require higher localization accuracy, it is recommended to use a LIDAR with higher precision.

Localization deviation.
The accuracy of the corner extraction algorithm is better than 9.8 cm, which is sufficient to control the sampling robot and complete the grain quality inspection. The reason for this is as follows: To prevent the sampling robot from colliding with the truck, the sampling points are placed at a distance of 20 cm from the bounding box of the corners. Even with a maximum deviation of 9.8 cm, we can ensure that all sample points are inside the truck without collision, see Fig. 14. The accuracy of the corner extraction is sufficient.

Sample points are inside the carriage.
The effectiveness of the corner point localization approach proposed in this paper has been extensively tested in experiments. By defining a fixed clipping area, the grain carriage can be extracted quickly without complex segmentation of the truck point cloud. Using the projection method to map 3D point clouds onto the ground plane, the rear and side panel point clouds of the carriage are extracted for 2D line fitting to localize the first corner point. This approach avoids the instability introduced by 3D plane fitting and achieves a high line fitting accuracy with an average error of 4.7 cm. Finally, along the fitted lines, a region growing method based on the density variation rates of the grid point cloud is proposed to extract the second and third corner points. This method is characterized by high precision, simplicity, and adaptation to the uneven density variation of point clouds. Experimental results show a localization accuracy of up to 9.8 cm. If higher localization accuracy is required, it is recommended to replace the LIDAR with a more precise one while keeping the rotation mechanism unchanged. However, the proposed method also has its limitations:
- The accuracy of corner point localization may decrease for vehicles with significant deformations, as the projected side or rear panel lines of the deformed vehicle may have poor linearity, which affects the accuracy of line fitting.
- For trucks with small gaps between the front and the carriage, there may not be a significant difference in point cloud density between the side panels and the front. This can cause the region growing method to incorrectly locate the corner points in the front part of the vehicle.
In this paper, we addressed the challenges posed by different grain truck models and variable parking positions in automated sampling, which leads to difficulties in carriage localization and inaccurate measurements. We proposed a method that incorporates fixed parking areas and robust grain truck point cloud extraction using polygonal clipping. We introduced a projection-based approach using line fitting to determine the first corner point of the carriage, which demonstrated high precision. This first corner point served as a reference for corner localization. Additionally, we proposed a region growing method based on density variation rates of point clouds to localize the second and third corner points. We used the prior knowledge of symmetry to determine the position of the fourth corner point. The achieved localization accuracy was up to 9.8 cm. Finally, we used the localized corner points to generate randomly sampled points within the carriage boundary.
Future research directions include exploring localization solutions by integrating LIDAR with 2D cameras and using image segmentation methods to extract the rear of the carriage, which is mapped onto the point cloud for localization. This could solve the problem of low localization accuracy for trucks with small gaps between the front and carriage. In addition, the application of point cloud deep learning methods for carriage corner point localization could be investigated.