Abstract: Drone racing is becoming a popular e-sport all over the world, and beating the best human drone race pilots has quickly become a new major challenge for artificial intelligence and robotics. In this paper, we propose a strategy for autonomous drone racing that is computationally more efficient than navigation methods such as visual inertial odometry (VIO) and simultaneous localization and mapping (SLAM). This fast, lightweight, vision-based navigation algorithm estimates the position of the drone by fusing race gate detections with model dynamics predictions. Theoretical analysis and simulation results show a clear advantage over Kalman filtering when dealing with the relatively low-frequency visual updates and occasional large outliers that occur in fast drone racing. Flight tests are performed on a tiny racing quadrotor named "Trashcan", equipped with a JeVois smart camera, for a total mass of 72 g. The test consists of 3 laps on a 4-gate racing track, in which the gates are spaced 4 m apart and can be displaced from their nominal positions. An average speed of 2 m/s is achieved, with a maximum speed of 2.6 m/s. To the best of our knowledge, this flying platform is the smallest autonomous racing drone in the world and is nearly 6 times lighter than the previously lightest autonomous racing drone setup (420 g), while still being one of the fastest autonomous racing drones in the world.
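To give a concrete flavor of the prediction-correction fusion the abstract refers to, the following is a minimal, hypothetical Python sketch. It is not the paper's actual algorithm: the class name `ModelVisionFuser`, the linear-drag dynamics model, the fixed correction gain, and the innovation-gating threshold are all illustrative assumptions. It only shows the general idea of propagating a high-rate model prediction and gating out low-frequency visual updates that disagree too strongly with it, which is where a plain Kalman filter can be pulled off course by outlier gate detections.

```python
# Hypothetical sketch of model-prediction / vision-correction fusion.
# All names, models, and thresholds are illustrative assumptions.
import numpy as np

class ModelVisionFuser:
    def __init__(self, outlier_gate=1.5):
        self.pos = np.zeros(2)            # planar position estimate [m]
        self.vel = np.zeros(2)            # planar velocity estimate [m/s]
        self.outlier_gate = outlier_gate  # max accepted innovation [m]

    def predict(self, acc_cmd, dt, drag=0.5):
        # High-rate prediction from a simple translational drone model:
        # commanded acceleration (from attitude) minus linear drag.
        acc = np.asarray(acc_cmd) - drag * self.vel
        self.pos += self.vel * dt + 0.5 * acc * dt**2
        self.vel += acc * dt

    def correct(self, gate_pos_meas, gain=0.3):
        # Low-frequency visual update from a detected race gate.
        # A very large innovation (e.g. a mis-detected gate) is treated
        # as an outlier and discarded rather than averaged in.
        innovation = np.asarray(gate_pos_meas) - self.pos
        if np.linalg.norm(innovation) > self.outlier_gate:
            return False                  # reject outlier detection
        self.pos += gain * innovation
        return True

if __name__ == "__main__":
    f = ModelVisionFuser()
    for k in range(100):                  # 100 steps at 100 Hz
        f.predict(acc_cmd=(1.0, 0.0), dt=0.01)
        if k % 30 == 0:                   # ~3 Hz visual updates
            f.correct(gate_pos_meas=(f.pos[0] + 0.1, 0.0))
    print("position estimate:", f.pos)
```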