Abstract: High-performance autonomy often must operate at the boundaries of safety. When external agents are present in a system, ensuring safety without sacrificing performance becomes extremely difficult. In this paper, we present an approach to stress testing such systems based on the rapidly exploring random tree (RRT) algorithm. We propose to find faults through adversarial agent perturbations, in which the behaviors of other agents in an otherwise fixed scenario are modified. This creates a large search space of possibilities, which we explore both randomly and with a focused strategy that runs RRT in a bounded projection of the observable states, which we call the objective space. The approach is applied to generate tests for evaluating overtaking logic and path planning algorithms in autonomous racing, where vehicles drive at high speed in an adversarial environment. We evaluate several autonomous racing path planners and find numerous collisions during overtake maneuvers in all of them. The focused RRT search finds several times more crashes than the random strategy and, for certain planners, tens to hundreds of times more crashes in the second half of the track.
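To make the idea of running RRT over adversarial agent perturbations in a bounded objective space concrete, the following is a minimal illustrative sketch, not the paper's implementation. The simulate() function, the two-parameter perturbation of the adversary, the (progress, gap) objective coordinates, and all numeric bounds are hypothetical placeholders standing in for the actual racing simulation and state projection.

```python
# Illustrative sketch: RRT-style search over adversarial-agent perturbations,
# projected into a low-dimensional "objective space". simulate() is a
# hypothetical stand-in for a closed-loop racing simulation.
import math
import random


def simulate(perturbation):
    """Hypothetical simulation stub: maps a perturbation of the adversary's
    behavior (speed offset, lateral offset) to a point in a 2D objective
    space (ego progress, gap to adversary) and a collision flag."""
    speed_off, lateral_off = perturbation
    progress = max(0.0, 1.0 - 0.5 * abs(speed_off))
    gap = max(0.0, 1.0 - abs(lateral_off) - 0.3 * abs(speed_off))
    crashed = gap < 0.05
    return (progress, gap), crashed


def rrt_objective_space(iterations=500, bounds=((0.0, 1.0), (0.0, 1.0))):
    """Grow an RRT in the bounded objective space; each node remembers the
    perturbation that produced it, and crashing perturbations are returned."""
    nodes = []      # points reached so far in objective space
    perturbs = []   # perturbation that produced each node
    crashes = []
    # seed the tree with the unperturbed scenario
    point, _ = simulate((0.0, 0.0))
    nodes.append(point)
    perturbs.append((0.0, 0.0))
    for _ in range(iterations):
        # sample a target uniformly in the bounded objective space
        target = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        # standard RRT step: expand from the node nearest to the sample
        idx = min(range(len(nodes)),
                  key=lambda i: math.dist(nodes[i], target))
        # locally perturb the adversary behavior that reached that node
        base = perturbs[idx]
        cand = (base[0] + random.gauss(0.0, 0.1),
                base[1] + random.gauss(0.0, 0.1))
        point, crashed = simulate(cand)
        nodes.append(point)
        perturbs.append(cand)
        if crashed:
            crashes.append(cand)
    return crashes


if __name__ == "__main__":
    found = rrt_objective_space()
    print(f"found {len(found)} crashing perturbations")
```

Under these assumptions, the tree biases exploration toward regions of the objective space not yet covered, which is the intuition behind the focused strategy outperforming purely random perturbation sampling.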