Abstract: Array synthetic aperture radar (SAR) three-dimensional (3D) imaging obtains 3D information about the target region and is widely used in environmental monitoring and scattering information measurement. In recent years, with the development of compressed sensing (CS) theory, sparse signal processing has been applied to array SAR 3D imaging. Compared with matched filtering (MF), sparse SAR imaging can effectively improve image quality. However, sparse imaging based on handcrafted regularization functions suffers from target information loss when few SAR observations are available. Therefore, this article presents a general 3D sparse imaging framework for array SAR based on Regularization by Denoising (RED) and proximal gradient descent-type methods. First, we construct explicit prior terms via state-of-the-art denoising operators instead of handcrafted regularization functions, which improves the accuracy of sparse reconstruction and preserves the structural information of the target. Then, different proximal gradient descent-type solvers are presented, including generalized alternating projection (GAP) and the alternating direction method of multipliers (ADMM), both of which are suitable for high-dimensional data processing. Additionally, the proposed method has robust convergence and can achieve sparse 3D SAR reconstruction from few observations. Extensive simulations and real-data experiments are conducted to analyze the performance of the proposed method. The experimental results show that it has superior sparse reconstruction performance.
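The RED-regularized gradient iteration this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the moving-average `smooth_denoiser` is a placeholder for the state-of-the-art denoisers the abstract refers to, and a 1D real-valued model y = Ax + n stands in for the 3D complex SAR operator.

```python
import numpy as np

def smooth_denoiser(x, w=3):
    # Stand-in denoiser D(x): moving-average smoothing. The actual
    # framework would plug in a far stronger denoising operator here.
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

def red_gradient_descent(A, y, lam=0.1, iters=300):
    # RED gradient step: x <- x - mu * (A^T (A x - y) + lam * (x - D(x))).
    # The prior gradient lam * (x - D(x)) is the RED substitute for the
    # gradient of a handcrafted regularization function.
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # step size from Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad_data = A.T @ (A @ x - y)
        grad_prior = lam * (x - smooth_denoiser(x))
        x = x - mu * (grad_data + grad_prior)
    return x
```

The same data-fidelity gradient appears inside the GAP and ADMM variants; only the way the denoiser-based prior is split off differs.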
Abstract: This work focuses on multi-target tracking in video synthetic aperture radar (ViSAR), specifically tracking based on targets' shadows. Current methods have limited accuracy because they do not fully consider the shadows' characteristics and their surroundings: shadows are low-scattering and varied, leading to missed tracking, while surroundings can cause interference, leading to false tracking. To solve these problems, we propose a shadow-oriented multi-target tracking method (SOTrack). To avoid false tracking, a pre-processing module is proposed to enhance shadows against their surroundings, reducing interference. To avoid missed tracking, a deep-learning-based detection method is designed to thoroughly learn the shadows' features, improving detection accuracy; a recall module is further designed to recover missed shadows. Experiments on measured data demonstrate that, compared with other methods, SOTrack achieves much higher tracking accuracy (by 18.4%), and an ablation study confirms the effectiveness of the proposed modules.
Abstract: Benefiting from a relatively large aperture angle combined with a wide transmit bandwidth, near-field synthetic aperture radar (SAR) provides a high-resolution image of a target's scattering distribution (hot spots). Meanwhile, the imaging result suffers inevitable degradation from sidelobes, clutter, and noise, hindering information retrieval about the target. To restore the image, current methods make simplifying assumptions, for example that the point spread function (PSF) is spatially consistent or that the target consists of sparse point scatterers; they therefore achieve limited restoration performance with respect to the target's shape, especially for complex targets. To address these issues, this work conducts a preliminary study of restoration with recent, promising deep-learning inverse techniques. We reformulate the degradation model as a spatially variant complex-convolution model in which the near-field SAR system response is considered. Adhering to this model, a model-based deep learning network is designed to restore the image. A simulated degraded-image dataset built from multiple complex target models is constructed to validate the network; all images are generated with an electromagnetic simulation tool. Experiments on the dataset reveal the network's effectiveness: compared with current methods, superior performance is achieved in estimating the target's shape and energy.
Abstract: This work focuses on 3D radar imaging inverse problems. Current methods obtain undifferentiated results that suffer task-dependent information-retrieval loss and thus do not meet a task's specific demands well. For example, biased scattering energy may be acceptable for screening imaging but not for scattering diagnosis. To address this issue, we propose a new task-oriented imaging framework. The imaging principle is made task-oriented through an analysis phase that identifies the task's demands; the imaging model is multi-cognition regularized to embed and fulfill those demands; and the imaging method is designed to be generalized, with couplings between cognitions decoupled and solved individually using approximation and variable-splitting techniques. Tasks including scattering diagnosis, person screening imaging, and parcel screening imaging are given as examples. Experiments on data from two systems indicate that the proposed framework outperforms current ones in task-dependent information retrieval.
Abstract: Deep learning (DL)-based tomographic SAR (tomoSAR) imaging algorithms are gradually being studied. Typically, they use an unfolding network to mimic the iterative calculation of classical compressive sensing (CS)-based methods and process each range-azimuth unit individually. However, only one-dimensional features are effectively utilized in this way, and the correlation between adjacent resolution units is ignored. To address this, we propose a new model-data-driven network that achieves tomoSAR imaging based on multi-dimensional features. Guided by the deep unfolding methodology, a two-dimensional deep unfolding imaging network is constructed. On this basis, we add two 2D processing modules, both convolutional encoder-decoder structures, to effectively enhance the multi-dimensional features of the imaging scene. Meanwhile, to train the proposed multi-feature imaging network, we construct a tomoSAR simulation dataset consisting entirely of simulated building data. Experiments verify the effectiveness of the model: compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our method achieves better completeness while maintaining decent imaging accuracy.
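The deep-unfolding idea underlying such networks can be sketched as follows: each network layer reproduces one ISTA iteration, with the step matrices and per-layer thresholds promoted to learnable parameters. This is a generic sketch of the unfolding methodology, not the authors' 2D network; here the parameters are simply initialized from the classical ISTA update rather than trained, and each layer handles one 1D unit.

```python
import numpy as np

def soft(v, t):
    # soft-thresholding, the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class UnrolledISTA:
    """K-layer unfolded ISTA: layer k computes x <- soft(W1 y + W2 x, theta[k]).
    In a trained unfolding network, W1, W2, and theta are learned from data;
    here they are fixed at their classical ISTA values as a sketch."""
    def __init__(self, A, K=10, lam=0.05):
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data term
        self.W1 = A.T / L                      # input weight (shared across layers here)
        self.W2 = np.eye(A.shape[1]) - A.T @ A / L
        self.theta = np.full(K, lam / L)       # per-layer thresholds (learnable)
        self.K = K

    def forward(self, y):
        x = np.zeros(self.W2.shape[0])
        for k in range(self.K):
            x = soft(self.W1 @ y + self.W2 @ x, self.theta[k])
        return x
```

The abstract's 2D extension would replace these matrix products with 2D operators and interleave the convolutional encoder-decoder modules between layers.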
Abstract: Millimeter-wave (mmW) radar is widely applied in advanced driver-assistance systems. However, its small antenna aperture causes low imaging resolution. In this paper, a new distributed mmW radar system is designed to solve this problem: using multiple-input multiple-output (MIMO) processing, it forms a large, sparse virtual planar array to enlarge the aperture. However, traditional imaging methods cannot be applied to such a sparse array. Therefore, we also propose a 3D super-resolution imaging method specifically for this system. The proposed method consists of three steps: (1) a range FFT for range imaging; (2) a 2D adaptive diagonal loading iterative adaptive approach (ADL-IAA) for 2D super-resolution imaging, which can handle the array's sparsity under a single measurement; and (3) constant false alarm rate (CFAR) processing to obtain the final 3D super-resolution image. Simulation results show that the proposed method significantly improves imaging resolution under a sparse array and a single measurement.
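The final CFAR step of the pipeline above can be illustrated with a basic 1D cell-averaging CFAR detector. This is a textbook sketch, not the paper's implementation: the guard/training sizes and the scale factor are illustrative choices, and the paper applies the test across the 3D image rather than a 1D profile.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=5.0):
    """Cell-averaging CFAR: a cell is declared a detection when its power
    exceeds the local noise level (mean of the training cells on either
    side of a guard band) times a fixed scale factor."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # training cells: the window around i, excluding the guard band and i itself
        window = np.concatenate((power[lo:max(0, i - guard)],
                                 power[i + guard + 1:hi]))
        if window.size == 0:
            continue
        noise = window.mean()
        detections[i] = power[i] > scale * noise
    return detections
```

Because the threshold tracks the local noise estimate, the false-alarm rate stays roughly constant as the clutter level varies along the profile.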
Abstract: Inevitable interferences exist in SAR systems, adversely affecting imaging quality. However, current analysis and suppression methods mainly focus on the far-field situation; because the sources and characteristics of near-field interferences differ, these methods are not applicable in the near field. To bridge this gap, an analysis and a suppression method for interferences in near-field SAR are presented for the first time in this work. We find that echoes from nadir points and from antenna coupling are the main causes, and both have a constant-time-delay feature. To characterize this, we further establish an analytical model. It reveals that their patterns in 1D, 2D, and 3D imaging results are all comb-like, whereas those of targets are point-like. Utilizing these features, an image-domain suppression method is proposed based on low-rank reconstruction. Measured data validate the correctness of our analysis and the effectiveness of the suppression method.
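The comb-like versus point-like distinction can be exploited in a simple way: a constant-time-delay interference repeats almost identically across azimuth, so in the image matrix it is (near) low-rank, while point targets are sparse. A minimal sketch of the low-rank separation idea, using plain truncated-SVD subtraction rather than the paper's reconstruction algorithm:

```python
import numpy as np

def suppress_lowrank(img, rank=1):
    """Estimate the low-rank component (constant-delay interference that
    repeats across azimuth rows) by truncated SVD and subtract it, leaving
    the sparse point-like target responses."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return img - lowrank
```

A full low-rank reconstruction would instead solve a joint low-rank-plus-sparse decomposition, but the rank-1 subtraction already shows why the comb pattern separates cleanly from isolated targets.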
Abstract: Images of near-field SAR contain spatially variant sidelobes and clutter, degrading image quality. Current image restoration methods are suitable only for small observation angles because they assume a 2D spatially invariant degradation operation, which limits their potential for imaging large-scale objects such as aircraft. To ease this restriction, an image restoration method based on 2D spatially variant deconvolution is proposed in this work. First, the image degradation is modeled as a complex convolution process with 2D spatially variant operations. Then, to restore the image, deconvolution is performed with a cyclic coordinate descent algorithm. Experiments on simulated and measured data validate the effectiveness and superiority of the proposed method: compared with current methods, it estimates the targets' amplitudes and positions more precisely.
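The cyclic coordinate descent step can be sketched on a simplified 1D, real-valued, spatially invariant deconvolution problem (the paper's setting is 2D, complex, and spatially variant, but the coordinate update has the same form): each scatterer amplitude is updated in turn against the current residual, with soft-thresholding enforcing sparsity.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ccd_deconv(A, y, lam=0.1, sweeps=50):
    """Cyclic coordinate descent for the l1-regularized deconvolution
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1, where the columns of A
    are shifted copies of the PSF. One sweep updates every coordinate
    once against the running residual."""
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()                       # residual y - A x
    col_sq = (A ** 2).sum(axis=0)      # per-column squared norms
    for _ in range(sweeps):
        for j in range(n):
            if col_sq[j] == 0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]   # correlation with partial residual
            new = soft(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - new)            # keep residual consistent
            x[j] = new
    return x
```

In the spatially variant case, each column of A holds the PSF valid at that pixel's location instead of a single shifted kernel, and the same update applies unchanged.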
Abstract: With the booming of convolutional neural networks (CNNs), networks such as VGG-16 and ResNet-50 widely serve as backbones in SAR ship detection. However, CNN-based backbones struggle to model long-range dependencies, leaving the feature maps of shallow layers short of high-quality semantic information, which leads to poor detection performance for complicated backgrounds and small ships. To address these problems, we propose a SAR ship detection method based on the Swin Transformer and a Feature Enhancement Feature Pyramid Network (FEFPN). The Swin Transformer serves as the backbone to model long-range dependencies and generate hierarchical feature maps. FEFPN is proposed to further improve feature-map quality by gradually enhancing the semantic information of feature maps at all levels, especially those in shallow layers. Experiments conducted on the SAR ship detection dataset (SSDD) reveal the advantages of the proposed method.
Abstract: Interferometric synthetic aperture radar (InSAR) imaging methods are usually based on matched-filtering-type algorithms that do not consider the scene's characteristics, which limits imaging quality. Besides, post-processing steps such as image registration, flat-earth phase removal, and phase-noise filtering are unavoidable. To solve these problems, we propose a new InSAR imaging method. First, to enhance imaging quality, we propose a new imaging framework based on 2D sparse regularization in which the scene's characteristics are embedded. Second, to avoid the post-processing steps, we establish a new forward observation process in which the back-projection imaging method is embedded. Third, a forward-backward iterative solution method is proposed based on the proximal gradient descent algorithm. Experiments on simulated and measured data reveal the effectiveness of the proposed method: compared with the conventional method, higher-quality interferograms are obtained directly from raw echoes without post-processing, and the method remains applicable in under-sampling situations.
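The forward-backward proximal gradient iteration referred to above can be sketched for a generic complex-valued sparse imaging problem. This is an illustrative sketch with a plain matrix forward operator, not the paper's back-projection-embedded observation model; for complex SAR reflectivities the proximal step becomes soft-thresholding of the magnitude while the phase is preserved.

```python
import numpy as np

def complex_soft(v, t):
    # proximal operator of lam * ||x||_1 for complex x:
    # shrink the magnitude, keep the phase
    mag = np.abs(v)
    scale = np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)
    return v * scale

def ista_complex(A, y, lam=0.05, iters=300):
    """Forward-backward iteration for min_x 0.5*||A x - y||^2 + lam*||x||_1
    with complex data: a forward (gradient) step on the data-fidelity term,
    then a backward (proximal) step on the sparsity term."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        x = x - (A.conj().T @ (A @ x - y)) / L  # forward: gradient step
        x = complex_soft(x, lam / L)            # backward: proximal step
    return x
```

In the paper's framework, A would be replaced by the forward observation operator with back-projection embedded, so the iterate stays in the image domain and the interferogram can be formed directly from the two channels' results.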