Flying insects are capable of vision-based navigation in cluttered environments, reliably avoiding obstacles through fast and agile maneuvers while processing visual stimuli very efficiently. Autonomous micro air vehicles, meanwhile, still lag far behind their biological counterparts, displaying inferior performance at a much higher energy consumption. In light of this, we seek to mimic flying insects in terms of their processing capabilities and to apply the gained insights to a maneuver of relevance. This letter does so by evolving spiking neural networks for controlling landings of micro air vehicles, using the divergence of the optical flow field of a downward-looking camera. We demonstrate that the resulting neuromorphic controllers transfer robustly from a highly abstracted simulation to the real world, performing fast and safe landings while keeping the network spike rate minimal. Furthermore, we provide insight into the resources required to successfully solve the problem of divergence-based landing, showing that high-resolution control can potentially be learned with only a single spiking neuron. To the best of our knowledge, this is the first work integrating spiking neural networks into the control loop of a real-world flying robot. Videos of the experiments can be found at http://bit.ly/neuro-controller.