Abstract:"This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible." Driver's interaction with a vehicle via automatic gesture recognition is expected to enhance driving safety by decreasing driver's distraction. Optical and infrared-based gesture recognition systems are limited by occlusions, poor lighting, and varying thermal conditions and, therefore, have limited performance in practical in-cabin applications. Radars are insensitive to lighting or thermal conditions and, therefore, are more suitable for in-cabin applications. However, the spatial resolution of conventional radars is insufficient for accurate gesture recognition. The main objective of this research is to derive an accurate gesture recognition approach using low-resolution radars with deep learning-based super-resolution processing. The main idea is to reconstruct high-resolution information from the radar's low-resolution measurements. The major challenge is the derivation of the real-time processing approach. The proposed approach combines conventional signal processing and deep learning methods. The radar echoes are arranged in 3D data cubes and processed using a super-resolution model to enhance range and Doppler resolution. The FFT is used to generate the range-Doppler maps, which enter the deep neural network for efficient gesture recognition. The preliminary results demonstrated the proposed approach's efficiency in achieving high gesture recognition performance using conventional low-resolution radars.
Abstract: Vision-based autonomous driving requires reliable and efficient object detection. This work proposes a DiffusionDet-based framework that fuses data from a monocular camera and a depth sensor to provide RGB and depth (RGB-D) data. Within this framework, ground-truth bounding boxes are perturbed with random noise during training, allowing the model to learn to reverse the diffusion (noise-addition) process. At the inference stage, the system iteratively refines a randomly generated set of boxes, guiding them toward accurate final detections. By integrating the textural and color features of RGB images with the spatial depth information from the LiDAR sensor, the proposed framework employs a feature fusion that substantially enhances detection of automotive targets. Comprehensive experiments on the KITTI dataset show a $2.3$ AP gain in detecting automotive targets. In particular, the improved performance of the proposed approach in detecting small objects is demonstrated.
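A hedged sketch of the DiffusionDet-style box diffusion this abstract relies on: during training, ground-truth boxes are corrupted toward noise; at inference, random boxes are iteratively refined back toward detections. The cosine schedule, the normalized (cx, cy, w, h) box parameterization, and the toy denoiser are illustrative assumptions; the paper's actual denoising head would consume fused RGB-D features.

```python
# Sketch: forward box diffusion (training) and few-step reverse
# refinement (inference), DDIM-style. All constants are assumed.
import torch

T = 1000                                            # diffusion steps (assumed)
t_lin = torch.linspace(0, 1, T + 1)
alphas_bar = torch.cos((t_lin + 0.008) / 1.008 * torch.pi / 2) ** 2

def q_sample(boxes, t, noise):
    """Training: corrupt ground-truth boxes x0 -> xt (forward diffusion)."""
    ab = alphas_bar[t]
    return ab.sqrt() * boxes + (1 - ab).sqrt() * noise

def p_sample_loop(denoiser, n_boxes, steps=4):
    """Inference: refine random boxes toward detections over a few steps."""
    x = torch.randn(n_boxes, 4)                     # start from pure noise
    ts = torch.linspace(T - 1, 0, steps).long()
    for i, t in enumerate(ts):
        x0_pred = denoiser(x, t)                    # network predicts clean boxes
        if i + 1 < len(ts):                         # re-noise to the next step
            x = q_sample(x0_pred, ts[i + 1], torch.randn_like(x0_pred))
        else:
            x = x0_pred
    return x

# Toy denoiser standing in for the detection head over fused RGB-D features.
toy_denoiser = lambda x, t: x * 0.5                 # placeholder, not a real model
gt = torch.rand(8, 4)                               # normalized (cx, cy, w, h)
t = torch.randint(0, T, (1,))
noisy = q_sample(gt, t, torch.randn_like(gt))       # training-time corruption
final = p_sample_loop(toy_denoiser, n_boxes=100)    # inference-time refinement
print(noisy.shape, final.shape)
```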
Abstract: This paper addresses the multiple-annotation problem in a medical application: cancer detection in digital images. The main assumption is that although images are labeled by many experts, the number of images read by any single expert is not large. Thus, differing from existing work that models each expert and the ground truth simultaneously, the multiple-annotation information is used in a soft manner. The multiple labels from different experts are used to estimate the probability that a finding should be marked as malignant. The learning algorithm minimizes the Kullback-Leibler (KL) divergence between the modeled probabilities and the desired ones while constraining the model to be compact. The probabilities are modeled by logistic regression, and the multiple-instance learning (MIL) concept is employed. Experiments on a real-life computer-aided diagnosis (CAD) problem, lung cancer detection in chest radiographs (CXR), demonstrate that the proposed algorithm achieves results similar to learning with a binary RVMMIL classifier or a mixture of binary RVMMIL models per annotator, while having lower model complexity and thus being preferable in practice.
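A minimal sketch of the soft-label idea: the soft target for each finding is the fraction of experts marking it malignant, and a logistic-regression model is fit by minimizing KL(target || model) with an L2 penalty standing in for the compactness constraint. For a Bernoulli target, the KL gradient reduces to the familiar logistic gradient with soft targets, X^T(q - p). The data, feature dimension, and penalty weight are illustrative assumptions, and the MIL aggregation over instances within a bag is omitted for brevity.

```python
# Sketch: soft targets from multiple annotators + KL-minimizing
# logistic regression with an L2 compactness penalty. Placeholder data.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_experts = 200, 5, 4
X = rng.normal(size=(n, d))
votes = rng.integers(0, 2, size=(n, n_experts))   # placeholder expert labels
p = votes.mean(axis=1)                            # soft target per finding

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, lam, lr = np.zeros(d), 0.1, 0.1
for _ in range(500):
    q = sigmoid(X @ w)                            # modeled probability
    # Gradient of mean_i KL(p_i || q_i) w.r.t. w: X^T (q - p) / n.
    grad = X.T @ (q - p) / n + lam * w            # L2 keeps the model compact
    w -= lr * grad

print("learned weights:", np.round(w, 3))
```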