Abstract: Amari's Dynamic Neural Field (DNF) framework provides a brain-inspired approach to modeling the average activation of neuronal groups. Leveraging a single field, DNF has become a promising foundation for low-energy looming perception modules in robotic applications. However, previous DNF methods face significant challenges in detecting incoherent or inconsistent looming features--conditions commonly encountered in real-world scenarios, such as collision detection in rainy weather. Insights from the visual systems of fruit flies and locusts reveal that encoding ON/OFF visual contrast plays a critical role in enhancing looming selectivity. Additionally, the lateral excitation mechanism potentially refines the responses of loom-sensitive neurons to both coherent and incoherent stimuli. Together, these insights offer valuable guidance for improving looming perception models. Building on this biological evidence, we extend the previous single-field DNF framework by incorporating the modeling of ON/OFF visual contrast, each pathway governed by a dedicated DNF. Lateral excitation within each ON/OFF-contrast field is formulated using a normalized Gaussian kernel, and their outputs are integrated in the Summation field to generate collision alerts. Experimental evaluations show that the proposed model effectively addresses incoherent looming detection challenges and significantly outperforms state-of-the-art locust-inspired models. It demonstrates robust performance across diverse stimuli, including synthetic rain effects, underscoring its potential for reliable looming perception in complex, noisy environments with inconsistent visual cues.
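The architecture sketched in this abstract can be illustrated with a minimal 1-D simulation. This is a hedged sketch, not the authors' implementation: it assumes Amari's standard field equation (tau * du/dt = -u + K * f(u) + S + h, with a Heaviside firing rate f), a unit-sum Gaussian for the lateral-excitation kernel, a simple rectification split of luminance change into ON/OFF channels, and a summation stage that merely counts supra-threshold activity. All parameter values and the `on_off_split` helper are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized (unit-sum) Gaussian lateral-excitation kernel -- the
    # normalization scheme is an assumption, not taken from the paper.
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def dnf_step(u, stim, kernel, h=-1.0, tau=10.0, dt=1.0):
    # One Euler step of Amari's field equation:
    #   tau * du/dt = -u + (kernel * f(u)) + stim + h
    # where f is a Heaviside firing-rate function and h < 0 is the
    # resting level (both standard choices, assumed here).
    f = (u > 0).astype(float)
    lateral = np.convolve(f, kernel, mode="same")
    return u + (dt / tau) * (-u + lateral + stim + h)

def on_off_split(dL):
    # Hypothetical contrast encoding: brightening drives the ON channel,
    # dimming drives the OFF channel (half-wave rectification).
    return np.maximum(dL, 0.0), np.maximum(-dL, 0.0)

# Toy run: a coherent brightening patch should excite only the ON field.
size = 64
kernel = gaussian_kernel(9, sigma=1.5)
u_on = np.zeros(size)
u_off = np.zeros(size)
dL = np.zeros(size)
dL[28:36] = 2.0  # luminance increase over a small patch

for _ in range(50):
    s_on, s_off = on_off_split(dL)
    u_on = dnf_step(u_on, s_on, kernel)
    u_off = dnf_step(u_off, s_off, kernel)

# Summation stage (simplified): pool supra-threshold activity from both
# fields into a scalar that could be thresholded for a collision alert.
alert = float((u_on > 0).sum() + (u_off > 0).sum())
print(alert > 0)  # ON field is active, so the pooled signal is nonzero
```

The two-field split matters for incoherent stimuli such as rain: scattered brightening/dimming flecks excite the ON and OFF fields separately, so neither field's lateral excitation builds the coherent activity bump that a genuinely looming edge produces.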
Abstract: Compared to human vision, insect visual systems excel at rapid and precise collision detection, despite relying on only tens of thousands of neurons organized into a few neuropils. This efficiency makes them an attractive model system for developing artificial collision-detecting systems. Specifically, researchers have identified collision-selective neurons in the locust's optic lobe, called lobula giant movement detectors (LGMDs), which respond specifically to approaching objects. Research on LGMD neurons began in the early 1970s. Initially, owing to their large size, these neurons were identified as motion detectors, but their role as looming detectors was recognized over time. Since then, progress in neuroscience, computational modeling of the LGMD's visual neural circuits, and LGMD-based robotics has advanced in tandem, each field supporting and driving the others. Today, with a deeper understanding of LGMD neurons, LGMD-based models have significantly improved collision-free navigation in mobile robots, including ground and aerial platforms. This review highlights recent developments in LGMD research from the perspectives of neuroscience, computational modeling, and robotics. It emphasizes a biologically plausible research paradigm in which insights from neuroscience inform real-world applications, which in turn validate and advance neuroscience. With strong support from extensive research and growing application demand, this paradigm has reached a mature stage and demonstrates versatility across different areas of neuroscience research, thereby deepening our understanding of the interconnections between neuroscience, computational modeling, and robotics. Furthermore, other motion-sensitive neurons have also shown promising potential for adopting this research paradigm.