Abstract: With the increasing safety validation requirements for the release of self-driving cars, alternative approaches such as simulation-based testing are emerging alongside conventional real-world testing. To rely on virtual tests, the employed sensor models have to be validated. It is therefore necessary to quantify the discrepancy between simulation and reality in order to determine whether a certain fidelity is sufficient for a given intended use. No sound method exists to measure this simulation-to-reality gap of radar perception for autonomous driving. We address this problem by introducing a multi-layered evaluation approach that combines an explicit and an implicit sensor model evaluation. The former directly assesses the realism of the synthetically generated sensor data, while the latter evaluates a downstream target application. To demonstrate the method, we evaluate the fidelity of three typical radar model types (ideal, data-driven, and ray tracing-based) and their applicability for virtually testing radar-based multi-object tracking. We show that the proposed approach provides an in-depth sensor model assessment that renders existing disparities visible and enables a realistic estimation of the overall model fidelity across different scenarios.
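As a rough illustration of the two evaluation layers described above, the following sketch contrasts an explicit point-cloud comparison with an implicit comparison of a downstream tracking score. The Chamfer distance and the tracking-score delta are illustrative assumptions; the abstract does not prescribe specific measures.

```python
# Illustrative sketch of a two-layer sensor model evaluation. The metric
# choices (Chamfer distance, tracking-score delta) are assumptions made
# for illustration, not the paper's prescribed measures.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(real_pts: np.ndarray, sim_pts: np.ndarray) -> float:
    """Explicit layer: symmetric Chamfer distance between a real and a
    simulated radar point cloud, each of shape (N, 3) / (M, 3)."""
    d_rs, _ = cKDTree(sim_pts).query(real_pts)   # real -> nearest simulated
    d_sr, _ = cKDTree(real_pts).query(sim_pts)   # simulated -> nearest real
    return float(d_rs.mean() + d_sr.mean())

def implicit_gap(track_score_real: float, track_score_sim: float) -> float:
    """Implicit layer: difference of a downstream tracking score
    (e.g., MOTA) obtained on real vs. simulated sensor input."""
    return abs(track_score_real - track_score_sim)

# Usage with dummy data:
rng = np.random.default_rng(0)
real = rng.normal(size=(120, 3))
sim = real + rng.normal(scale=0.05, size=(120, 3))  # a close sensor model
print("explicit gap:", chamfer_distance(real, sim))
print("implicit gap:", implicit_gap(0.82, 0.79))
```

Reporting both layers side by side is what makes the assessment multi-layered: a model can score well on raw point-cloud realism yet still degrade the downstream tracker, or vice versa.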
Abstract: The use of environment sensor models for virtual testing is a promising approach to reduce the testing effort of autonomous driving. However, in order to draw conclusions about the performance of an autonomous driving function from simulation, the sensor model has to be validated to determine the discrepancy between synthetic and real sensor data. Since a certain degree of divergence can be assumed to exist, the sufficient level of fidelity must be determined, which poses a major challenge. In particular, no method for quantifying the fidelity of a sensor model exists, and the problem of defining an appropriate metric remains. In this work, we train a neural network to distinguish real and simulated radar sensor data with the purpose of learning the latent features of real radar point clouds. Furthermore, we propose the classifier's confidence score for the 'real radar point cloud' class as a metric for the degree of fidelity of synthetically generated radar data. We evaluate the presented approach and demonstrate that the proposed deep evaluation metric outperforms conventional metrics in its capability to identify characteristic differences between real and simulated radar data.
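The classifier-based fidelity metric could be sketched as follows. The PointNet-style architecture, the per-point feature set, and all names are assumptions for illustration; the abstract specifies only a real-vs-simulated classifier whose confidence for the 'real' class serves as the metric.

```python
# Minimal sketch of the deep evaluation metric, assuming a PointNet-style
# binary classifier over radar point clouds. Architecture and features
# (x, y, radial velocity, RCS) are illustrative assumptions.
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """Shared per-point MLP, max pooling for permutation invariance,
    then a binary head (class 0 = simulated, class 1 = real)."""
    def __init__(self, in_dim: int = 4):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, 2)

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (batch, n_points, in_dim)
        feat = self.point_mlp(pts).max(dim=1).values  # global cloud feature
        return self.head(feat)                        # class logits

def fidelity_score(model: nn.Module, sim_batch: torch.Tensor) -> float:
    """Mean softmax confidence for the 'real' class on simulated clouds;
    values near 1 indicate the classifier cannot tell them from real data."""
    with torch.no_grad():
        probs = torch.softmax(model(sim_batch), dim=-1)
    return probs[:, 1].mean().item()
```

After training the classifier on labeled real and simulated clouds in the usual cross-entropy fashion, `fidelity_score` applied to held-out simulated data yields the proposed metric.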
Abstract: Simulation-based testing is a promising approach to significantly reduce the validation effort of automated driving functions. Realistic models of environment perception sensors such as camera, radar, and lidar play a key role in this testing strategy. A generally accepted method to validate these sensor models does not yet exist. Radar in particular has traditionally been one of the most difficult sensors to model. Although promising as an alternative to real test drives, virtual tests are time-consuming because they simulate the entire radar system in detail, using computation-intensive techniques to approximate the propagation of electromagnetic waves. In this paper, we introduce a sensitivity analysis approach for developing and evaluating a radar simulation, with the objective of identifying the parameters with the greatest impact on the system under test. We present a modular, parameterizable radar system simulation and conduct a sensitivity analysis with a spatial clustering algorithm as the system under test, comparing the output of the radar model to real driving measurements to ensure realistic model behavior. We evaluate the presented approach and demonstrate that results from different situations can be traced back to the contributions of the individual sub-modules of the radar simulation.
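A minimal sketch of such a sensitivity analysis, using a one-at-a-time scheme, is given below. The `simulate()` function is a stand-in for the modular radar simulation, the parameter names are invented for illustration, and DBSCAN plays the role of the spatial clustering system under test.

```python
# One-at-a-time sensitivity analysis over radar model parameters.
# simulate(), the parameter names, and the clustering score are
# illustrative assumptions, not the paper's actual simulation modules.
import numpy as np
from sklearn.cluster import DBSCAN

def simulate(params: dict) -> np.ndarray:
    """Stand-in for the radar simulation: returns a 2D detection point
    cloud for a fixed scenario under the given model parameters."""
    rng = np.random.default_rng(42)                  # fixed scenario
    pts = rng.uniform(-20, 20, size=(200, 2))
    keep = rng.random(len(pts)) < params["detection_prob"]  # missed detections
    noise = rng.normal(scale=params["noise_std"], size=(keep.sum(), 2))
    return pts[keep] + noise

def sut_score(points: np.ndarray) -> float:
    """System under test: spatial clustering; here the score is simply
    the number of clusters found (noise label -1 excluded)."""
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points)
    return float(len(set(labels)) - (1 if -1 in labels else 0))

baseline = {"noise_std": 0.3, "detection_prob": 0.9}
reference = sut_score(simulate(baseline))  # in practice: real measurements

for name in baseline:                      # perturb one parameter at a time
    perturbed = dict(baseline, **{name: baseline[name] * 0.8})
    delta = sut_score(simulate(perturbed)) - reference
    print(f"{name}: cluster-count delta = {delta:+.0f}")
```

Ranking the parameters by the magnitude of their induced output change points to the sub-modules of the simulation whose fidelity matters most for this particular system under test.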