RS-Reference 2.1 Can Efficiently Evaluate the Performance of the LiDAR and Multi-Sensor Fusion Sensing System
SHENZHEN, China–(BUSINESS WIRE)– RoboSense LiDAR (https://robosense.ai/) today released RS-Reference 2.1, the latest version of its ground truth data system and evaluation tool chain, used for performance evaluation of LiDARs and multi-sensor fusion systems. The original RS-Reference was launched in 2016, when the automotive-grade MEMS solid-state LiDAR RS-LiDAR-M1 project was established. Used by global OEMs and Tier 1s, the system has been continuously improved and upgraded with more efficient and useful evaluation function modules and software tool chains.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20210209005689/en/
(Photo: Business Wire)
While the evaluation function modules can be likened to an exam, the ground truth data is the “answer” used to evaluate the perception system. The accuracy of the ground truth data must therefore be significantly higher than that of the device under test (DUT) in all aspects, including detection performance and geometric error.
The ground truth data, usually petabytes (PB) in scale, includes dynamic information such as obstacle types, speeds, and locations, and static information such as lane lines and road boundaries.
Data labeling quality and data generation efficiency are the key factors for ground truth data.
- The RS-Reference system provides a complete set of ground truth data generation and evaluation solutions, outputting detection performance and geometric error indicators with a labeling efficiency close to 1:1. The results are significantly more accurate than those of real-time perception, manual labeling, and traditional labeling tools.
- High-performance and mature sensor data collection system: the RS-Reference system contains the RoboSense 128-beam LiDAR RS-Ruby, a Leopard camera, a Continental 408 millimeter-wave radar, a GI-6695 RTK unit, and, new in version 2.1, two RoboSense RS-Bpearl LiDARs covering near-field blind spots.
- Detached roof-mounted deployment without vehicle body modification: the RS-Reference system adapts to different vehicle sizes, does not occupy the sensor mounting positions of the DUT, and directly evaluates intelligent driving systems whose sensor sets match those of production vehicles.
- Continuously improved perception algorithm and offline processing mechanism: the algorithm is the key to smart labeling, as opposed to manual labeling, and is responsible for extracting the ground truth data. The RS-Reference system uses a customized, dedicated offline perception algorithm, the product of RoboSense’s 13+ years of accumulated experience in LiDAR sensing algorithm technology. It performs “full life process tracking and identification” for each obstacle and extracts all ground truth data from every frame. The system adds speed and acceleration labels, accurately delineates the size of each labeling frame using comprehensive shape and size information, and can accurately separate obstacles that are in close proximity to each other in complex scenes.
- Full-stack evaluation tool chain: it includes data collection tools, sensor calibration tools, visualization tools, manual verification tools, evaluation tools, and more. Version 2.1 upgrades the data management platform and adds a scene semantic labeling function that serves every step of the evaluation process.
- Individual sensor evaluation within the multi-sensor fusion system: not only can the RS-Reference system evaluate the fused perception result of the intelligent driving system, but it can also provide targeted solutions based on the characteristics of different sensor types, such as LiDAR, millimeter-wave radar, and camera. Dedicated or customized tool modules can be developed according to customer needs for further in-depth analysis of sensing system performance.
- Extended application value: the RS-Reference system supports planning and control algorithm development, can generate massive ground truth data to build simulation scenes, and can evaluate roadside perception systems.
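To illustrate the general idea behind such an evaluation, the sketch below shows how a tool chain of this kind might compare DUT detections against ground truth: detections are matched to ground truth boxes by overlap, then detection performance (precision, recall) and a geometric error (center offset) are reported. The box format, IoU threshold, and function names are illustrative assumptions, not part of the RS-Reference product.

```python
def iou(a, b):
    """Axis-aligned IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def evaluate(detections, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching of detections to ground truth.

    Returns (precision, recall, mean center error of matched pairs),
    i.e. one detection-performance pair and one geometric-error indicator.
    """
    matched_gt, center_errors, tp = set(), [], 0
    for det in detections:
        best_j, best_iou = None, iou_thresh
        for j, gt in enumerate(ground_truth):
            if j in matched_gt:
                continue
            v = iou(det, gt)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matched_gt.add(best_j)
            tp += 1
            gt = ground_truth[best_j]
            # Offset between box centers, used here as the geometric error.
            cx_e = ((det[0] + det[2]) - (gt[0] + gt[2])) / 2
            cy_e = ((det[1] + det[3]) - (gt[1] + gt[3])) / 2
            center_errors.append((cx_e ** 2 + cy_e ** 2) ** 0.5)
    precision = tp / len(detections) if detections else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    mean_err = sum(center_errors) / len(center_errors) if center_errors else 0.0
    return precision, recall, mean_err
```

In a real tool chain of this kind, the matching would run per frame over tracked 3D obstacles rather than 2D boxes, but the principle is the same: the higher-accuracy ground truth serves as the answer key against which the DUT's output is scored.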