Autopilot 3D Laser Point Cloud Object Detection

Date: 2022-01-20


The key technologies of autonomous driving include environmental perception, precise positioning, decision-making and planning, control and execution, high-precision maps, V2X vehicle networking, and testing and validation.


Among these, technologies such as environmental perception and precise positioning are highly dependent on spatial data.


Vector and raster (image) maps are the main forms of traditional two-dimensional spatial data. With the development of new technologies, represented above all by autonomous driving, traditional spatial data representations can no longer meet the field's needs. A new generation of sensors records higher-resolution, more accurate measurements, and point cloud data has emerged as a result.


1. What is point cloud data


Point cloud data is a dense collection of points, collected directly or indirectly by some measuring means, that describes the surface characteristics of a target according to the measurement rules. It is the third kind of spatial data after vector and raster data, and it provides the most direct and effective representation of the three-dimensional real world.


At present, the laser point cloud is the most representative type of three-dimensional data, and a common data type in the field of auto-driving.


In a vehicle's environmental perception module, sensors of many kinds make up the core hardware system, above all the vehicle-mounted lidar.


The working principle of lidar is very similar to that of radar. A pulsed laser, acting as the signal source, projects light onto vehicles, pedestrians, or buildings along the road; the light scatters, and part of it is reflected back to the lidar receiver.


Based on the principle of laser ranging, the distance from the lidar to a target point can be computed. By continuously scanning with pulsed laser, all points on the target surface can be measured; after processing the resulting point cloud, an accurate three-dimensional image is obtained.
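The ranging step above can be sketched with the basic time-of-flight formula: the pulse travels to the target and back, so the measured round-trip time is halved.

```python
# Time-of-flight ranging: a minimal sketch of the basic lidar distance formula.
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back, so halve it."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after ~667 nanoseconds hit a target roughly 100 m away.
print(round(range_from_tof(667e-9), 1))  # 100.0
```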


2. What point cloud data contains


Point cloud data contains rich information for each point, such as spatial coordinates, intensity, multiple returns, and color.


Taking a vehicle lidar as an example, here is a brief look at the information contained in collected laser point cloud data:


1. x, y, z coordinate information


From the collected x, y, z coordinates, the three-dimensional structure of the measured object can be obtained directly; this structure is the carrier for all other geographic information.
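As a minimal sketch, a point cloud can be held as a list of records whose x, y, z fields carry the 3D structure that the other attributes attach to. The point type and values here are illustrative, not a standard format.

```python
# A minimal illustrative point record: x, y, z coordinates plus one extra
# attribute (intensity). Real formats carry many more fields per point.
from dataclasses import dataclass

@dataclass
class LidarPoint:
    x: float
    y: float
    z: float
    intensity: int = 0

def bounding_extent(points):
    """Axis-aligned extent of the cloud along each coordinate axis."""
    xs = [p.x for p in points]
    ys = [p.y for p in points]
    zs = [p.z for p in points]
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

cloud = [LidarPoint(1.0, 2.0, 0.1), LidarPoint(4.0, -1.0, 0.3), LidarPoint(2.5, 0.5, 2.2)]
print(bounding_extent(cloud))  # ((1.0, 4.0), (-1.0, 2.0), (0.1, 2.2))
```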


2. Number of returns


The number of returns is the total number of echoes produced by a given pulse. Laser pulses emitted from a lidar system reflect off the ground surface and off objects on it, including cars, pedestrians, bridges, and so on.


An emitted laser pulse may come back to the lidar sensor as one or more returns: when a pulse traveling toward the ground encounters multiple reflective surfaces, it is split into as many echoes as there are surfaces.
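This splitting can be illustrated with hypothetical return records. Storing each point's return number alongside the pulse's total number of returns (a common convention) makes it easy to, for example, keep only the last return of each pulse:

```python
# Hypothetical return records: (pulse_id, return_number, number_of_returns).
returns = [
    (1, 1, 3), (1, 2, 3), (1, 3, 3),  # pulse 1 split into 3 echoes (e.g. by a tree)
    (2, 1, 1),                        # pulse 2 hit a single solid surface
]

# The last return of each pulse is the echo from the farthest surface the
# beam reached, which is why last returns are often used in ground filtering.
last_returns = [r for r in returns if r[1] == r[2]]
print(last_returns)  # [(1, 3, 3), (2, 1, 1)]
```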


3. Intensity information


Intensity, recorded for each point, measures the strength of the return of the lidar pulse that generated the point. Different objects reflect the laser with different strengths, so objects can be distinguished by their intensity signatures.
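A toy sketch of this idea, with illustrative points and an arbitrary threshold: strongly reflective targets (retroreflective signs, lane markings) stand out against low-intensity background such as asphalt.

```python
# Hypothetical (x, y, z, intensity) tuples; the threshold is illustrative.
points = [
    (1.0, 0.0, 0.2, 30),    # asphalt: low return intensity
    (2.0, 0.5, 1.5, 220),   # road sign: retroreflective, high intensity
    (3.0, -1.0, 0.1, 45),   # asphalt
]

HIGH_INTENSITY = 180  # made-up cutoff for this sketch
reflective = [p for p in points if p[3] >= HIGH_INTENSITY]
print(reflective)  # [(2.0, 0.5, 1.5, 220)]
```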


4. Categories


Each processed lidar point can carry a classification that defines the type of object the pulse reflected from. Lidar points can be classified into categories such as automobiles, pavement, and so on.


5. RGB


RGB (red, green, blue) bands can be stored as attributes of lidar data. This attribute is usually derived from imagery captured during the lidar survey.
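One way such colorization can work, sketched with a simple pinhole camera model: project each lidar point into a time-aligned camera image and sample the pixel color there. The intrinsics and the point below are made-up values, not from any real sensor.

```python
# Pinhole projection of a camera-frame 3D point to a pixel coordinate,
# the first step of attaching an RGB value to a lidar point.

def project_to_pixel(x, y, z, fx, fy, cx, cy):
    """Camera-frame point (z forward) -> integer pixel (u, v); assumes z > 0."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return int(round(u)), int(round(v))

# A point 10 m ahead and 1 m to the left, expressed in the camera frame,
# with illustrative intrinsics for a 1280x720 image:
u, v = project_to_pixel(-1.0, 0.0, 10.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
print(u, v)  # 540 360 -> the pixel whose RGB value the point would receive
```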


6. Scan direction


The scan direction is the direction in which the laser scanning mirror was moving when the pulse was emitted.


3. Point cloud data and auto-driving


Auto-driving has broad market prospects, which has driven research into all kinds of environmental sensors. In this field, accurate environmental perception and precise positioning are key to reliable navigation, informed decision-making, and safe driving in complex, dynamic environments.


At present, applications of 3D laser point cloud data in the auto-driving field fall into two broad areas:


1. Real-time environmental perception and processing based on scene understanding and target detection.


A real-time 3D model of the car's surroundings can be obtained by vehicle lidar scanning. By comparing environmental changes between consecutive frames with suitable algorithms, surrounding vehicles and pedestrians can be identified, obstacles avoided automatically, and the safety of auto-driving improved.
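A toy sketch of this frame-to-frame comparison: voxelize both frames and compare which voxels are occupied. Real pipelines work on far denser clouds and compensate for ego-motion first; the clouds and voxel size here are purely illustrative.

```python
# Change detection between two consecutive frames via voxel occupancy.
VOXEL = 0.5  # voxel edge length in metres (illustrative)

def occupied_voxels(points):
    """Map each (x, y, z) point to its integer voxel index and deduplicate."""
    return {(int(x // VOXEL), int(y // VOXEL), int(z // VOXEL)) for x, y, z in points}

prev_frame = [(1.0, 1.0, 0.0), (5.0, 2.0, 0.0)]
curr_frame = [(1.0, 1.0, 0.0), (5.5, 2.0, 0.0)]  # the second object has moved

# Voxels occupied now but not before flag potential moving objects/obstacles.
appeared = occupied_voxels(curr_frame) - occupied_voxels(prev_frame)
print(sorted(appeared))  # [(11, 4, 0)]
```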


2. SLAM-enhanced positioning


In unknown environments, auto-driving vehicles cannot continuously correct their position against a known map. They can only obtain environmental information through their own sensors, extracting valid information through signal processing to build a map of the environment.


Simultaneous localization and mapping (SLAM) with lidar makes it possible to build a global map in real time; accurate navigation and vehicle positioning are then achieved by matching observed features against a high-precision map.
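As a deliberately simplified sketch of this feature matching: if point-to-landmark correspondences are already known and there is no rotation error, the position correction reduces to the mean displacement between matched features. Real systems estimate full rigid transforms (e.g. with ICP or NDT); the coordinates below are illustrative.

```python
# Degenerate scan-to-map matching: translation-only, correspondences given.

def translation_offset(scan_pts, map_pts):
    """Mean 2D displacement from scanned features to their matched map features."""
    n = len(scan_pts)
    dx = sum(m[0] - s[0] for s, m in zip(scan_pts, map_pts)) / n
    dy = sum(m[1] - s[1] for s, m in zip(scan_pts, map_pts)) / n
    return dx, dy

scan = [(1.0, 2.0), (3.0, 4.0)]      # features seen by the lidar
hd_map = [(1.5, 2.0), (3.5, 4.0)]    # the same features in the HD map

# The vehicle's position estimate should be shifted by this offset.
print(translation_offset(scan, hd_map))  # (0.5, 0.0)
```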


4. Case study of laser point cloud labeling


In the data labeling field, common 3D laser point cloud labeling types include single-frame labeling, continuous-frame labeling, 2D-3D fused labeling, and point cloud semantic segmentation of panoramic images.


We take single-frame object detection as an example to walk through an autopilot laser point cloud labeling case in detail.


1. Labeling content:


Data is collected with a vehicle lidar and divided into several segments. From each segment, one frame per second is selected, and the vehicle objects within each selected frame are labeled with 3D cuboids.


2. Labeling requirements:


1) Objects covered by fewer than 10 points do not need to be labeled.


2) The boundaries of each 3D cuboid must fit tightly around the object's scanned points.


3) Each 3D cuboid labels exactly one object, and the same target must not be labeled more than once.


4) Two 3D cuboids must not be nested inside each other.


5) If occlusion splits a labeled object into two visible parts, both parts share a single 3D cuboid.


6) For partially occluded objects, complete the cuboid reasonably based on the visible points and the specific situation.
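Requirements like the minimum point count and the no-nesting rule lend themselves to automated checks. A sketch, assuming axis-aligned boxes for simplicity (production cuboids are oriented) and hypothetical box and point values:

```python
# Automated validation of two labeling rules: a box must contain at least
# min_points points, and no box may be fully nested inside another.

def points_inside(box, points):
    (x0, y0, z0), (x1, y1, z1) = box
    return [p for p in points
            if x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1]

def is_nested(inner, outer):
    (ix0, iy0, iz0), (ix1, iy1, iz1) = inner
    (ox0, oy0, oz0), (ox1, oy1, oz1) = outer
    return (ox0 <= ix0 and ix1 <= ox1 and oy0 <= iy0
            and iy1 <= oy1 and oz0 <= iz0 and iz1 <= oz1)

def validate(boxes, cloud, min_points=10):
    """Return indices of boxes violating the min-point or nesting rules."""
    bad = set()
    for i, b in enumerate(boxes):
        if len(points_inside(b, cloud)) < min_points:
            bad.add(i)
        for j, other in enumerate(boxes):
            if i != j and is_nested(b, other):
                bad.add(i)
    return sorted(bad)

boxes = [((0.0, 0.0, 0.0), (2.0, 2.0, 2.0)),
         ((0.5, 0.5, 0.5), (1.0, 1.0, 1.0))]   # second box is nested in the first
cloud = [(0.1 * k, 0.1 * k, 0.1 * k) for k in range(1, 13)]  # 12 diagonal points
print(validate(boxes, cloud))  # [1]
```

Box 1 is flagged on both counts: it is nested inside box 0, and too few of the cloud's points fall inside it.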