Abstract: Autonomous vehicles (AVs) have demonstrated significant potential in revolutionizing transportation, yet ensuring their safety and reliability remains a critical challenge, especially when they are exposed to dynamic and unpredictable environments. Real-world testing of an Autonomous Driving System (ADS) is both expensive and risky, making simulation-based testing a preferred approach. In this paper, we propose AVASTRA, a Reinforcement Learning (RL)-based approach to generating realistic critical scenarios for testing ADSs in simulation environments. To capture the complexity of driving scenarios, AVASTRA comprehensively represents the environment through both the internal states of the ADS under test (e.g., the status of the ADS's core components, its speed, or its acceleration) and the external states of the surrounding factors in the simulation environment (e.g., weather, traffic flow, or road condition). AVASTRA trains the RL agent to configure the simulation environment so that the AV is placed in dangerous situations that potentially lead to collisions. We introduce a diverse set of actions that allows the RL agent to systematically configure both environmental conditions and traffic participants. Additionally, based on established safety requirements, we enforce heuristic constraints to ensure the realism and relevance of the generated test scenarios. AVASTRA is evaluated on two popular simulation maps with four different road configurations. Our results show that AVASTRA outperforms the state-of-the-art approach, generating 30% to 115% more collision scenarios. Compared to a Random Search baseline, AVASTRA achieves up to 275% better performance. These results highlight the effectiveness of AVASTRA in enhancing the safety testing of AVs through realistic, comprehensive critical scenario generation.
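To make the abstract's formulation concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation) of the kind of RL setup described: an observation combining the ADS's internal state with external simulation factors, and actions that reconfigure weather and traffic to drive the AV toward a collision. All class names, fields, and the reward shape are assumptions introduced for illustration.

```python
from dataclasses import dataclass
import random

@dataclass
class Observation:
    ads_speed: float          # internal state of the ADS under test
    ads_acceleration: float
    module_status_ok: bool    # e.g., health of perception/planning components
    weather: str              # external simulation state
    traffic_density: float
    road_condition: str

class ScenarioEnv:
    """Toy environment: the agent picks configuration actions; the reward
    grows as the (simulated) time-to-collision shrinks."""

    ACTIONS = ["add_npc_vehicle", "increase_rain", "reduce_friction", "noop"]

    def reset(self) -> Observation:
        self.time_to_collision = 10.0  # seconds, purely illustrative
        return Observation(10.0, 0.0, True, "clear", 0.1, "dry")

    def step(self, action: str):
        # Heuristic stand-in for stepping a real driving simulator after
        # applying the chosen environment/traffic configuration.
        if action != "noop":
            self.time_to_collision -= random.uniform(0.5, 2.0)
        collided = self.time_to_collision <= 0.0
        reward = 10.0 if collided else 1.0 / max(self.time_to_collision, 0.1)
        obs = Observation(
            10.0, -1.0, True,
            "rain" if action == "increase_rain" else "clear",
            0.5,
            "wet" if action == "reduce_friction" else "dry",
        )
        return obs, reward, collided

# Usage: a random policy rollout, standing in for a trained RL agent.
env = ScenarioEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step(random.choice(ScenarioEnv.ACTIONS))
```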
Abstract: Recently, the evolution of deep learning has promoted the application of machine learning (ML) to various systems. However, some ML systems, such as autonomous vehicles, can cause critical damage when they misclassify. Moreover, there are ML-specific attacks, called adversarial attacks, that exploit the characteristics of ML systems. For example, one type of adversarial attack is an evasion attack, which uses minute perturbations called "adversarial examples" to intentionally cause classifiers to misclassify. Therefore, it is necessary to analyze the risk of ML-specific attacks when introducing ML-based systems. In this study, we propose a quantitative evaluation method for analyzing the risk of evasion attacks using attack trees. The proposed method consists of an extension of the conventional attack tree for analyzing evasion attacks and a systematic method for constructing the extended tree. In the extension of the conventional attack tree, we introduce ML attack nodes and conventional attack nodes to represent the various characteristics of evasion attacks. For the systematic construction, we propose a procedure consisting of three steps: (1) organizing information about attack methods reported in the literature into a matrix, (2) identifying evasion attack scenarios from the methods in the matrix, and (3) constructing the attack tree from the identified scenarios using a pattern. Finally, we conducted experiments on three ML image recognition systems to demonstrate the versatility and effectiveness of the proposed method.
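As a rough illustration of the extended attack tree the abstract describes, the sketch below models OR/AND gates plus leaf nodes flagged as either ML-specific steps (e.g., crafting adversarial examples) or conventional attack steps, and assembles one hypothetical evasion scenario as in step (3) of the procedure. The node labels and the example scenario are assumptions, not the paper's actual tree.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    gate: str = "LEAF"              # "AND", "OR", or "LEAF"
    kind: str = "conventional"      # "ml" or "conventional" for leaf nodes
    children: List["Node"] = field(default_factory=list)

def build_evasion_attack_tree() -> Node:
    """Assemble an attack tree from an identified scenario (illustrative)."""
    craft = Node("Craft adversarial example against the image classifier", kind="ml")
    query = Node("Query the deployed model to refine the perturbation", kind="ml")
    deliver = Node("Present the perturbed input to the target system", kind="conventional")
    scenario = Node("Evasion via adversarial example", gate="AND",
                    children=[craft, query, deliver])
    return Node("Cause targeted misclassification", gate="OR", children=[scenario])

def print_tree(node: Node, depth: int = 0) -> None:
    tag = node.gate if node.gate != "LEAF" else node.kind
    print("  " * depth + f"[{tag}] {node.label}")
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(build_evasion_attack_tree())
```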