Abstract: In this paper, we adopt the Universal Manifold Embedding (UME) framework for the estimation of rigid transformations and extend it so that it can accommodate scenarios involving partial overlap and differently sampled point clouds. UME is a methodology for mapping observations of the same object, related by rigid transformations, into a single low-dimensional linear subspace. This yields a transformation-invariant representation of the observations, whose matrix form is covariant (i.e., equivariant) with the transformation. We extend the UME framework by introducing a UME-compatible feature-extraction method augmented with a dedicated UME contrastive loss and a sampling equalizer. These components are integrated into a comprehensive and robust registration pipeline, named UMERegRobust. We also propose the RotKITTI registration benchmark, specifically tailored to evaluate registration methods in scenarios involving large rotations. UMERegRobust achieves better-than-state-of-the-art performance on the KITTI benchmark, especially when the strict precision of (1°, 10cm) is considered (with an average gain of +9%), and notably outperforms SOTA methods on the RotKITTI benchmark (with a +45% gain over the most recent SOTA method).
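To make the covariance/invariance property concrete, here is a minimal NumPy sketch of the core idea, not the paper's actual UME construction: a matrix built from per-point invariant weights and homogeneous point coordinates transforms covariantly under a rigid motion, so its column space (the linear subspace) is invariant. The weighting `w`, the matrix construction, and the subspace metric are illustrative assumptions.

```python
import numpy as np

def ume_style_matrix(points, weights):
    """Toy UME-like matrix: per-point invariant weights times
    homogeneous coordinates [1, x, y, z] (illustrative construction)."""
    homo = np.hstack([np.ones((len(points), 1)), points])
    return homo * weights[:, None]

def subspace_distance(M1, M2):
    """Grassmann distance between column spaces via principal angles."""
    Q1, _ = np.linalg.qr(M1)
    Q2, _ = np.linalg.qr(M2)
    cosines = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.linalg.norm(np.arccos(np.clip(cosines, -1.0, 1.0)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # source point cloud
w = rng.uniform(1.0, 2.0, size=100)           # transformation-invariant weights
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
if np.linalg.det(R) < 0:                      # force a proper rotation
    R[:, 0] *= -1
Y = X @ R.T + np.array([1.0, -2.0, 0.5])      # rigidly transformed cloud

# The two matrices differ, but they span the same 4-D subspace (distance ~ 0).
print(subspace_distance(ume_style_matrix(X, w), ume_style_matrix(Y, w)))
```

The reason this works: a rigid motion acts on homogeneous coordinates as an invertible linear map, so the toy matrix satisfies M(XR^T + t) = M(X)A for an invertible 4x4 matrix A; the matrix is covariant with the transformation while the subspace it spans is invariant.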
Abstract: Object detection in radar imagery with neural networks shows great potential for improving autonomous driving. However, obtaining annotated datasets from real radar images, crucial for training these networks, is challenging, especially in scenarios with long-range detection and adverse weather and lighting conditions, where radar performance excels. To address this challenge, we present RadSimReal, an innovative physical radar simulation capable of generating synthetic radar images with accompanying annotations for various radar types and environmental conditions, all without the need for real data collection. Remarkably, our findings demonstrate that training object-detection models on RadSimReal data and subsequently evaluating them on real-world data produces performance comparable to that of models trained and tested on real data from the same dataset, and even better performance when testing across different real datasets. RadSimReal offers advantages over other physical radar simulations in that it does not require knowledge of radar design details, which are often not disclosed by radar suppliers, and it has a faster run-time. This innovative tool has the potential to advance the development of computer vision algorithms for radar-based autonomous driving applications.
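As a sketch of how simulator output with free annotations can stand in for hand-labeled real radar data, the toy generator below renders point targets into a range-azimuth intensity map and returns their bounding boxes. Everything here (function name, clutter model, target model) is a hypothetical stand-in; the actual RadSimReal simulation models the underlying radar physics.

```python
import numpy as np

def simulate_radar_frame(num_targets=3, size=(256, 256), rng=None):
    """Toy stand-in for a physical radar simulator: renders extended
    targets into a range-azimuth intensity map and returns per-target
    annotations (r_min, a_min, r_max, a_max) in bin coordinates."""
    rng = rng or np.random.default_rng()
    img = rng.rayleigh(scale=0.1, size=size)   # speckle-like background clutter
    boxes = []
    for _ in range(num_targets):
        r = rng.integers(20, size[0] - 20)     # range bin of target center
        a = rng.integers(20, size[1] - 20)     # azimuth bin of target center
        ext = int(rng.integers(3, 8))          # target extent in bins
        img[r - ext:r + ext, a - ext:a + ext] += rng.uniform(0.5, 1.0)
        boxes.append((r - ext, a - ext, r + ext, a + ext))
    return img, boxes

# Synthetic frames arrive with annotations "for free", so they can feed a
# detector's training loop in place of hand-labeled real radar data.
frame, annotations = simulate_radar_frame(rng=np.random.default_rng(0))
print(frame.shape, annotations)
```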
Abstract: Automotive radars have an important role in autonomous driving systems. The main challenge in automotive radar detection is the radar's wide point spread function (PSF) in the angular domain, which causes blurriness and clutter in the radar image. Numerous studies suggest employing an 'end-to-end' learning strategy using a Deep Neural Network (DNN) to detect objects directly from radar images; this approach implicitly addresses the PSF's impact on objects of interest. In this paper, we propose an alternative approach, which we term "Boosting Radar Reflections" (BoostRad). In BoostRad, a first DNN is trained to narrow the PSF for all the reflection points in the scene. Its output is a boosted reflection image with higher resolution and reduced clutter, i.e., a sharper and cleaner image. A second DNN is then employed to detect objects within the boosted reflection image. We develop a novel method for training the boosting DNN that incorporates domain knowledge of the radar's PSF characteristics. BoostRad's performance is evaluated on the RADDet and CARRADA datasets, demonstrating its superiority over reference methods.
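A minimal PyTorch sketch of the two-stage structure described above: a boosting network maps the raw radar image to a sharper, lower-clutter image, and a detection network then operates on that output. The layer choices and the per-pixel objectness head are illustrative assumptions, not the paper's architectures or its PSF-aware training scheme.

```python
import torch
import torch.nn as nn

class BoostingNet(nn.Module):
    """Stage 1 (sketch): narrows the radar PSF, producing a sharper,
    lower-clutter boosted reflection image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class DetectionHead(nn.Module):
    """Stage 2 (sketch): predicts a per-pixel objectness map from the
    boosted image; a real detector would also regress boxes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

radar_image = torch.randn(1, 1, 256, 256)  # stand-in range-azimuth map
boosted = BoostingNet()(radar_image)       # stage 1: PSF narrowing
objectness = DetectionHead()(boosted)      # stage 2: detection on boosted image
print(boosted.shape, objectness.shape)
```

The design point is the decoupling itself: stage 1 can be supervised with PSF-informed targets independently of the detection objective, which is where the paper's domain-knowledge training method enters.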