Abstract: This paper considers mutual interference mitigation among automotive radars that use frequency-modulated continuous wave (FMCW) signals and multiple-input multiple-output (MIMO) virtual arrays. For the first time, we derive a general interference signal model that fully accounts not only for the time-frequency incoherence, e.g., different FMCW configuration parameters and time offsets, but also for the slow-time code MIMO incoherence and array configuration differences between the victim and interfering radars. Along with a standard MIMO-FMCW object signal model, we recast interference mitigation as spatial-domain object detection under incoherent MIMO-FMCW interference described by the explicit interference signal model, and propose a constant false alarm rate (CFAR) detector. More specifically, the proposed detector exploits the structural properties of the derived interference model in both the transmit and receive steering vector spaces. We also derive analytical closed-form expressions for the probabilities of detection and false alarm. Performance evaluation using both synthetic-level and phased-array system-level simulations confirms the effectiveness of the proposed detector over selected baseline methods.
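For readers unfamiliar with CFAR thresholding, the sketch below shows a standard one-dimensional cell-averaging CFAR in Python. It is generic background only, not the paper's spatial-domain detector in the steering vector space, and the window sizes and target false-alarm rate are illustrative assumptions.

```python
# Cell-averaging CFAR sketch (generic background, not the paper's detector):
# each cell under test (CUT) is compared against the average power of its
# training cells, scaled to meet a target false-alarm probability.
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    """Return a boolean detection mask for a 1-D power profile.

    num_train: total training cells (assumed even, split across both sides)
    num_guard: guard cells on each side of the CUT
    pfa:       target probability of false alarm
    """
    n = len(power)
    # Standard CA-CFAR scale factor for N training cells.
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    half = num_train // 2
    detections = np.zeros(n, dtype=bool)
    for i in range(half + num_guard, n - half - num_guard):
        # Training cells on both sides, excluding the guard cells.
        left = power[i - num_guard - half : i - num_guard]
        right = power[i + num_guard + 1 : i + num_guard + 1 + half]
        noise = np.mean(np.concatenate([left, right]))
        detections[i] = power[i] > alpha * noise
    return detections
```

In the paper's setting the tested statistic would come from the MIMO virtual-array domain rather than a generic 1-D profile, but the thresholding logic against an adaptively estimated interference-plus-noise level is the same in spirit.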
Abstract: This paper considers indoor localization using multi-modal signals, including Wi-Fi, inertial measurement unit (IMU), and ultra-wideband (UWB) measurements. Formulating localization as a multi-modal sequence regression problem, we propose a multi-stream recurrent fusion method that combines the current hidden state of each modality within a recurrent neural network while accounting for the modality uncertainty, which is learned directly from each modality's immediate past hidden states. The proposed method is evaluated on the large-scale SPAWC2021 multi-modal localization dataset and compared with a wide range of baselines, including trilateration, traditional fingerprinting methods, and convolutional network-based methods.
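As a rough illustration of the fusion idea, the PyTorch sketch below runs one GRU per modality and weights each stream's current hidden state by a confidence score predicted from that stream's previous hidden state. The architecture, dimensions, and the 2-D position head are our assumptions, not the paper's exact network.

```python
# Multi-stream recurrent fusion sketch (assumed architecture): one GRU per
# modality; fusion weights are predicted from each stream's previous hidden
# state, serving as a learned proxy for modality uncertainty.
import torch
import torch.nn as nn

class MultiStreamRecurrentFusion(nn.Module):
    def __init__(self, input_dims, hidden_dim=64):
        super().__init__()
        self.grus = nn.ModuleList(
            nn.GRU(d, hidden_dim, batch_first=True) for d in input_dims)
        # One scalar confidence per modality, from its past hidden state.
        self.conf = nn.ModuleList(nn.Linear(hidden_dim, 1) for _ in input_dims)
        self.head = nn.Linear(hidden_dim, 2)  # 2-D position (assumption)

    def forward(self, streams):
        # streams: list of (batch, time, input_dims[m]) tensors, time >= 2
        hiddens, scores = [], []
        for gru, conf, x in zip(self.grus, self.conf, streams):
            out, _ = gru(x)                     # (batch, time, hidden)
            hiddens.append(out[:, -1])          # current hidden state
            scores.append(conf(out[:, -2]))     # confidence from past state
        w = torch.softmax(torch.cat(scores, dim=-1), dim=-1)  # (batch, M)
        fused = sum(w[:, m:m + 1] * h for m, h in enumerate(hiddens))
        return self.head(fused)
```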
Abstract: Complementary to the fine-grained channel state information (CSI) from the physical layer and coarse-grained received signal strength indicator (RSSI) measurements, the mid-grained spatial beam attributes (e.g., beam SNR) available at millimeter-wave (mmWave) bands during the mandatory beam training phase can be repurposed for Wi-Fi sensing applications. In this paper, we propose a multi-band Wi-Fi fusion method for Wi-Fi sensing that hierarchically fuses features from the fine-grained CSI at sub-6 GHz and the mid-grained beam SNR at 60 GHz in a granularity matching framework. The granularity matching is realized by pairing two feature maps from the CSI and beam SNR at different granularity levels and linearly combining all paired feature maps into a fused feature map with learnable weights. To further address the issue of limited labeled training data, we propose an autoencoder-based multi-band Wi-Fi fusion network that can be pre-trained in an unsupervised fashion. Once the autoencoder-based fusion network is pre-trained, we detach the decoders and append multi-task sensing heads to the fused feature map, fine-tuning the fusion block and training the multi-task heads from scratch. The multi-band Wi-Fi fusion framework is thoroughly validated on in-house experimental Wi-Fi sensing datasets spanning three tasks: 1) pose recognition; 2) occupancy sensing; and 3) indoor localization. Comparison with four baseline methods (i.e., CSI-only, beam SNR-only, input fusion, and feature fusion) demonstrates that granularity matching improves multi-task sensing performance. Quantitative performance is further evaluated as a function of the amount of labeled training data, latent space dimension, and fine-tuning learning rate.
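A minimal sketch of the pairing-and-weighting step is given below, under our own assumptions about shapes (all feature maps projected to a common size beforehand) and with elementwise addition standing in for the pairing operation; the paper's actual fusion block may differ.

```python
# Granularity matching sketch (assumed form): pair CSI and beam-SNR feature
# maps across granularity levels and combine all pairs into one fused map
# with learnable, softmax-normalized weights.
import torch
import torch.nn as nn

class GranularityMatching(nn.Module):
    def __init__(self, num_levels):
        super().__init__()
        # One learnable weight per (CSI level, beam-SNR level) pair.
        self.weights = nn.Parameter(torch.zeros(num_levels, num_levels))

    def forward(self, csi_feats, snr_feats):
        # csi_feats, snr_feats: lists of feature maps, each already projected
        # to a common shape (batch, channels, h, w).
        w = torch.softmax(self.weights.flatten(), dim=0).view_as(self.weights)
        fused = 0.0
        for i, c in enumerate(csi_feats):
            for j, s in enumerate(snr_feats):
                fused = fused + w[i, j] * (c + s)  # one paired map per (i, j)
        return fused
```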
Abstract: One of the most essential prerequisites for successful task execution by a team of agents is to accurately estimate and track their poses. We consider a cooperative multi-agent positioning problem in which each agent performs single-agent positioning until it encounters another agent. Upon the encounter, the two agents measure their relative pose and exchange particle clouds representing their poses. We propose a cooperative positioning algorithm that fuses the received information with the locally available measurements and infers an agent's pose within a Bayesian framework. The algorithm is scalable to multiple agents, has relatively low computational complexity, admits decentralized implementation across agents, and imposes relatively mild requirements on communication coverage and bandwidth. Experiments indicate that the proposed algorithm considerably improves single-agent positioning accuracy, reduces the convergence time of a particle cloud, and, unlike its single-agent counterpart, remains robust in feature-scarce and symmetric environment layouts that would otherwise impede positioning.
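As a sketch of the fusion step, assuming 2-D positions, a noisy relative-position measurement, and a Gaussian kernel (all our assumptions, simplifying away orientation), the neighbor's cloud can be shifted by the measured offset and used to reweight and resample the local cloud:

```python
# Particle-cloud fusion sketch (assumptions ours): reweight local particles
# by how well they agree with the neighbor's cloud shifted by the measured
# relative position, then resample.
import numpy as np

def fuse_clouds(local, neighbor, rel_pos, sigma=0.5, rng=None):
    """local, neighbor: (N, 2) and (M, 2) particle positions;
    rel_pos: measured 2-D offset from neighbor to local agent;
    sigma: assumed std. dev. of the relative-pose measurement noise."""
    rng = rng or np.random.default_rng()
    predicted = neighbor + rel_pos  # where the local agent should be
    # Likelihood of each local particle under a Gaussian mixture (kernel
    # density) centered on the shifted neighbor particles.
    d2 = ((local[:, None, :] - predicted[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / sigma**2).mean(axis=1)
    w /= w.sum()
    idx = rng.choice(len(local), size=len(local), p=w)
    return local[idx]
```

The kernel-density weighting is one common way to fuse two particle representations; the paper's algorithm additionally handles full poses and decentralized operation, which this sketch does not attempt.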