Abstract: This paper deals with the problem of clustering data returned by a radar sensor network that monitors a region where multiple moving targets are present. The network is formed by nodes with limited functionalities that transmit the estimates of target positions (after a detection) to a fusion center without any association between measurements and targets. To solve the problem at hand, we resort to model-based learning algorithms; instead of applying the plain maximum likelihood approach, which is computationally demanding, we exploit a latent variable model coupled with the expectation-maximization algorithm. The devised estimation procedure returns posterior probabilities that are used to cluster the large volume of data collected by the fusion center. Remarkably, we also consider challenging scenarios with an unknown number of targets and estimate it by means of model order selection rules. The clustering performance of the proposed strategy is compared to that of conventional data-driven methods on synthetic data. The numerical examples show that the proposed solutions provide reliable clustering performance, outperforming the considered competitors.
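A minimal sketch of the kind of processing described above, assuming the fused position estimates follow a Gaussian mixture (one component per target), with scikit-learn's EM-fitted GaussianMixture and BIC standing in as the model order selection rule; the synthetic data, candidate range and variable names are illustrative, not taken from the paper.

```python
# Sketch: EM-based clustering of fused position measurements with BIC-based
# model order selection. Gaussian mixture assumption and names are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D position estimates from three targets (unknown to the algorithm)
positions = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(200, 2))
    for c in [(0.0, 0.0), (5.0, 5.0), (-4.0, 3.0)]
])

best_model, best_bic = None, np.inf
for k in range(1, 7):          # candidate numbers of targets
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(positions)
    bic = gmm.bic(positions)   # model order selection rule
    if bic < best_bic:
        best_model, best_bic = gmm, bic

labels = best_model.predict(positions)            # hard clustering
posteriors = best_model.predict_proba(positions)  # posterior probabilities
print("estimated number of targets:", best_model.n_components)
```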
Abstract: The monitoring of diver health during emergency events is crucial to ensuring the safety of personnel. A non-invasive system that continuously provides a measure of the respiration rate of individual divers is highly beneficial in this context. The paper reports on the application of short-range radar to record the respiration rate of divers within hyperbaric lifeboat environments. Results demonstrate that the respiratory motion can be extracted from the radar return signal by applying routine signal processing. Further, evidence is provided that the radar-based approach yields a more accurate measure of respiration rate than an audio signal from a headset microphone. The system supports improved evacuation protocols under critical operational scenarios.
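A minimal sketch of the kind of routine processing referred to above, assuming complex (I/Q) slow-time samples from a CW radar; the 24 GHz carrier, sampling rate, breathing band and simulated chest motion are assumptions for illustration only.

```python
# Sketch: respiration rate from a slow-time radar phase signal.
# Carrier frequency, band limits and the simulated motion are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20.0                      # slow-time sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)   # 60 s record
wavelength = 3e8 / 24e9        # 24 GHz radar, assumed
chest = 1e-3 * np.sin(2 * np.pi * 0.25 * t)        # 15 breaths/min, 1 mm motion
iq = np.exp(1j * 4 * np.pi * chest / wavelength)   # ideal single-scatterer return
iq += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

phase = np.unwrap(np.angle(iq))                    # arctangent demodulation
b, a = butter(4, [0.1, 0.6], btype="band", fs=fs)  # respiration band (Hz)
resp = filtfilt(b, a, phase)

spectrum = np.abs(np.fft.rfft(resp * np.hanning(resp.size)))
freqs = np.fft.rfftfreq(resp.size, 1 / fs)
rate_hz = freqs[np.argmax(spectrum)]
print(f"estimated respiration rate: {rate_hz * 60:.1f} breaths/min")
```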
Abstract: The paper centres on an assessment of modelling approaches for the processing of signals in CW and FMCW radar-based systems for the detection of vital signs. It is shown that the widely adopted phase extraction method, which relies on approximating the target as a single point scatterer, has limitations with respect to the simultaneous estimation of respiratory and heart rates. A method based on a velocity spectrum is proposed as an alternative capable of treating a wider range of application scenarios.
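One way to contrast the two processing routes on a toy CW return, sketched below; the two-scatterer signal model, carrier frequency and STFT settings are assumptions made for illustration and do not reproduce the paper's analysis.

```python
# Sketch: phase extraction (single-scatterer assumption) versus a
# velocity spectrum computed via a short-time Doppler transform.
import numpy as np
from scipy.signal import stft

fs, fc = 100.0, 24e9
lam = 3e8 / fc
t = np.arange(0, 30, 1 / fs)
resp = 2e-3 * np.sin(2 * np.pi * 0.3 * t)    # respiration displacement
heart = 2e-4 * np.sin(2 * np.pi * 1.2 * t)   # heartbeat displacement
# Return modelled as two scatterers rather than a single point target
sig = (np.exp(1j * 4 * np.pi * resp / lam)
       + 0.3 * np.exp(1j * 4 * np.pi * heart / lam))

# (a) Phase-extraction route: only valid under the single-scatterer assumption
phase = np.unwrap(np.angle(sig))
displacement = phase * lam / (4 * np.pi)

# (b) Velocity-spectrum route: short-time Doppler spectrum, velocity = f_d * lam / 2
f, tau, S = stft(sig, fs=fs, nperseg=256, return_onesided=False)
velocity_axis = np.fft.fftshift(f) * lam / 2
velocity_spectrum = np.fft.fftshift(np.abs(S), axes=0)  # (velocity, time) map
```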
Abstract: The treatment of interfering motion contributions remains one of the key challenges in radar-based vital sign monitoring. Removing the interference to extract the vital sign contributions is demanding due to overlapping Doppler bands, the complex structure of the interfering motions, and significant variations in the power levels of their contributions. A novel approach to interference removal based on a probabilistic deep learning model is presented. Results show that a convolutional encoder-decoder neural network with a variational objective is capable of learning a meaningful representation space of vital sign Doppler-time distributions, facilitating their extraction from a mixture signal. The approach is tested on semi-experimental data containing real vital sign signatures and simulated returns from interfering body motions. It is demonstrated that applying the proposed network enhances the extraction of the micro-Doppler frequency corresponding to the respiration rate.
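A compact PyTorch sketch of the model family described above, a convolutional encoder-decoder trained with a variational (VAE-style) objective on Doppler-time maps; the layer sizes, 128x128 input, loss weighting and dummy tensors are assumptions rather than the paper's configuration.

```python
# Sketch: convolutional encoder-decoder with a variational objective,
# mapping an interfered Doppler-time map to a clean vital-sign map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(                     # input: (B, 1, 128, 128)
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        self.fc_mu = nn.Linear(128 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(128 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        out = self.dec(self.fc_dec(z).view(-1, 128, 16, 16))
        return out, mu, logvar

def vae_loss(recon, target, mu, logvar, beta=1e-3):
    # Reconstruction of the clean vital-sign map plus the KL term.
    rec = F.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

model = ConvVAE()
x = torch.randn(8, 1, 128, 128)   # interfered Doppler-time maps (dummy)
y = torch.randn(8, 1, 128, 128)   # clean vital-sign maps (dummy)
recon, mu, logvar = model(x)
vae_loss(recon, y, mu, logvar).backward()
```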
Abstract: Deep learning techniques are being adopted for a growing range of micro-Doppler applications in which predictions are made from time-frequency signal representations. Most, if not all, of the reported applications focus on translating an existing deep learning framework to this new domain with no adjustment of the objective function. This practice misses an opportunity to encourage the model to prioritize features that are particularly relevant for micro-Doppler applications. The paper therefore introduces a micro-Doppler coherence loss, which is minimized when the normalized power of the micro-Doppler oscillatory components of the input and the output is matched. Experiments conducted on real data show that applying the introduced loss results in models that are more resilient to noise.
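One possible reading of such a loss, sketched in PyTorch: the oscillatory (non-DC cadence) power of the input and output Doppler-time maps is normalised and compared. This is an illustrative interpretation of the description above, not the paper's exact formulation.

```python
# Sketch of a micro-Doppler coherence loss: match the normalised power of the
# oscillatory cadence components of input and output Doppler-time maps.
import torch

def micro_doppler_coherence_loss(pred, target, eps=1e-8):
    # pred, target: (batch, doppler_bins, time_bins), real-valued magnitude maps
    def oscillatory_power(x):
        spec = torch.fft.rfft(x, dim=-1)   # cadence spectrum along the time axis
        power = spec.abs() ** 2
        power = power[..., 1:]             # drop DC: keep only oscillatory components
        return power / (power.sum(dim=(-2, -1), keepdim=True) + eps)
    return torch.mean(torch.abs(oscillatory_power(pred) - oscillatory_power(target)))

loss = micro_doppler_coherence_loss(torch.rand(4, 64, 128), torch.rand(4, 64, 128))
```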
Abstract: Convolutional neural networks have often been proposed for processing radar micro-Doppler signatures, most commonly with the goal of classifying the signals. The majority of works tend to disregard the phase information of the complex time-frequency representation. Here, the utility of the phase information, as well as the optimal format of the Doppler-time input for a convolutional neural network, is analysed. It is found that the performance achieved by convolutional neural network classifiers is heavily influenced by the type of input representation, even across formats carrying equivalent information. Furthermore, it is demonstrated that the phase component of the Doppler-time representation contains rich information useful for classification and that unwrapping the phase in the temporal dimension can improve the results compared to a magnitude-only solution, raising accuracy from 0.920 to 0.938 on the tested human activity dataset. A further improvement to 0.947 is achieved by training a linear classifier on embeddings from multiple formats.
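A small sketch of how the compared input formats can be assembled from a complex Doppler-time map, including temporal phase unwrapping; array shapes and the dB floor are illustrative assumptions.

```python
# Sketch: building CNN input channels from a complex Doppler-time map.
import numpy as np

def make_inputs(dt_map):
    # dt_map: complex array of shape (doppler_bins, time_bins)
    magnitude = 20 * np.log10(np.abs(dt_map) + 1e-6)        # log-magnitude channel
    phase = np.angle(dt_map)                                # wrapped phase in [-pi, pi)
    phase_unwrapped = np.unwrap(phase, axis=1)              # unwrap along the time axis
    mag_only = magnitude[None]                              # 1-channel input
    mag_phase = np.stack([magnitude, phase])                # 2-channel, wrapped phase
    mag_unwrapped = np.stack([magnitude, phase_unwrapped])  # 2-channel, unwrapped phase
    return mag_only, mag_phase, mag_unwrapped

dummy = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)
formats = make_inputs(dummy)
```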
Abstract: With the great capabilities of deep classifiers for radar data processing come the risks of learning dataset-specific features that do not generalise well. In this work, the robustness of two deep convolutional architectures, trained and tested on the same data, is evaluated. When standard training practice is followed, both classifiers exhibit sensitivity to subtle temporal shifts of the input representation, an augmentation that carries minimal semantic content. Furthermore, the models are extremely susceptible to adversarial examples. Both sensitivities are the result of the models overfitting to features that do not generalise well. As a remedy, it is shown that training on adversarial examples and temporally augmented samples can reduce this effect and lead to models that generalise better. Finally, models operating on the cadence-velocity diagram representation rather than the Doppler-time representation are demonstrated to be naturally more immune to adversarial examples.
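A PyTorch sketch of the two remedies mentioned above, FGSM-style adversarial samples and random circular temporal shifts of the Doppler-time input; the placeholder classifier, epsilon and shift range are assumptions, not the architectures evaluated in the paper.

```python
# Sketch: adversarial (FGSM) samples and random temporal shifts used as
# training-time augmentation for a Doppler-time classifier.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()   # adversarial copy of the batch

def random_temporal_shift(x, max_shift=16):
    # x: (batch, channels, doppler_bins, time_bins); circular shift along time
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(x, shifts=shift, dims=-1)

# Usage with a tiny placeholder classifier and dummy data
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 128, 4))
x, y = torch.rand(8, 1, 64, 128), torch.randint(0, 4, (8,))
x_adv = fgsm_example(model, x, y)
x_shifted = random_temporal_shift(x)
```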
Abstract: A variety of compression methods based on encoding images as the weights of a neural network have recently been proposed. Yet, the potential of similar approaches for video compression remains unexplored. In this work, we suggest a set of experiments for testing the feasibility of compressing video using two architectural paradigms, the coordinate-based MLP (CbMLP) and the convolutional network. Furthermore, we propose a novel technique of neural weight stepping, where subsequent frames of a video are encoded as low-entropy parameter updates. To assess the feasibility of the considered approaches, we test the video compression performance on several high-resolution video datasets and compare against existing conventional and neural compression techniques.
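A sketch of the weight-stepping idea as described above: a coordinate-based MLP is fitted to one frame, briefly fine-tuned on the next frame, and only the resulting parameter update is kept. The tiny MLP, frame size, training schedule and dummy frames are assumptions for illustration; entropy coding of the update is not shown.

```python
# Sketch: neural weight stepping. A coordinate-based MLP encodes frame t;
# frame t+1 is represented by a parameter update relative to frame t's weights.
import copy
import torch
import torch.nn as nn

def coord_grid(h, w):
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (h*w, 2) coordinates

mlp = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3))                     # RGB value per coordinate

def fit(model, coords, frame, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    target = frame.reshape(-1, 3)
    for _ in range(steps):
        opt.zero_grad()
        ((model(coords) - target) ** 2).mean().backward()
        opt.step()

h, w = 64, 64
coords = coord_grid(h, w)
frame0, frame1 = torch.rand(h, w, 3), torch.rand(h, w, 3)  # dummy frames

fit(mlp, coords, frame0)                    # base network for frame 0
base = copy.deepcopy(mlp.state_dict())
fit(mlp, coords, frame1, steps=50)          # short fine-tune for frame 1
# Store only the parameter update (the "weight step") for frame 1
delta = {k: mlp.state_dict()[k] - base[k] for k in base}
```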
Abstract: Coordinate-based Multilayer Perceptron (MLP) networks, despite being capable of learning neural implicit representations, are not performant for internal image synthesis applications. Convolutional Neural Networks (CNNs) are typically used instead for a variety of internal generative tasks, at the cost of a larger model. We propose Neural Knitwork, an architecture for neural implicit representation learning of natural images that achieves image synthesis by optimizing the distribution of image patches in an adversarial manner and by enforcing consistency between the patch predictions. To the best of our knowledge, this is the first implementation of a coordinate-based MLP tailored to synthesis tasks such as image inpainting, super-resolution, and denoising. We demonstrate the utility of the proposed technique by training on these three tasks. The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity. The resulting model requires 80% fewer parameters than alternative CNN-based solutions while achieving comparable performance and training time.
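A much-reduced sketch of the central idea above: a coordinate MLP predicts a whole patch per coordinate, and overlapping patch predictions are tied together by a consistency penalty. The adversarial patch term of the full method is omitted, and the network size, patch size and coordinate sampling are assumptions.

```python
# Sketch: coordinate MLP mapping a pixel coordinate to a KxK RGB patch, with a
# consistency loss on the region shared by horizontally neighbouring patches.
import torch
import torch.nn as nn

K = 3                                         # patch side length (assumed)
net = nn.Sequential(nn.Linear(2, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, K * K * 3))        # one RGB patch per coordinate

def patch_consistency(coords, step):
    # Patches predicted at a coordinate and at its right-hand neighbour should
    # agree on the K x (K-1) region they both cover.
    p_here = net(coords).view(-1, 3, K, K)
    p_right = net(coords + torch.tensor([step, 0.0])).view(-1, 3, K, K)
    return ((p_here[..., :, 1:] - p_right[..., :, :-1]) ** 2).mean()

coords = torch.rand(512, 2) * 2 - 1               # random coordinates in [-1, 1]^2
loss = patch_consistency(coords, step=2.0 / 64)   # neighbour spacing, 64-px image assumed
loss.backward()
```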
Abstract: This paper addresses the challenge of classifying polarimetric SAR images by leveraging the peculiar characteristics of the polarimetric covariance matrix (PCM). To this end, a general framework for solving a multiple hypothesis test is introduced with the aim of detecting and classifying contextual spatial variations in polarimetric SAR images. Specifically, under the null hypothesis, only an unknown structure is assumed for data belonging to a 2-dimensional spatial sliding window, whereas under each alternative hypothesis, the data are partitioned into subsets sharing different structures. The problem of partition estimation is solved by resorting to hidden random variables representative of covariance structure classes and to the expectation-maximization algorithm. The effectiveness of the proposed detection strategies is demonstrated on both simulated and real polarimetric SAR data, also in comparison with existing classification algorithms.
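A heavily simplified stand-in for the kind of processing involved: sample polarimetric covariance matrices are formed over a sliding window and their log-scaled entries are clustered with an EM-fitted Gaussian mixture. This surrogate only illustrates the PCM-plus-EM ingredients; it is not the hypothesis-testing framework derived in the paper, and all sizes and data are synthetic assumptions.

```python
# Simplified stand-in: form sample polarimetric covariance matrices (PCMs)
# over a sliding window and cluster their features with EM (Gaussian mixture).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
H, W, D = 64, 64, 3                       # image size and scattering-vector dimension
scatter = rng.normal(size=(H, W, D)) + 1j * rng.normal(size=(H, W, D))

win = 5
feats, half = [], win // 2
for i in range(half, H - half):
    for j in range(half, W - half):
        block = scatter[i - half:i + half + 1, j - half:j + half + 1].reshape(-1, D)
        pcm = block.conj().T @ block / block.shape[0]    # sample PCM of the window
        # Real-valued feature: log-magnitudes of the upper-triangular PCM entries
        feat = np.log(np.abs(pcm[np.triu_indices(D)]) + 1e-12)
        feats.append(feat)

labels = GaussianMixture(n_components=3, random_state=0).fit_predict(np.array(feats))
label_map = labels.reshape(H - 2 * half, W - 2 * half)   # per-pixel structure class
```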