Abstract:Incremental Few-Shot Semantic Segmentation (iFSS) addresses a task that requires a model to continually expand its segmentation capability to novel classes using only a few annotated examples. Typical incremental approaches face the challenge that the objective of the base training phase (fitting base classes with abundant instances) does not align with that of the incremental learning phase (rapidly adapting to new classes with minimal forgetting). This disconnect can result in suboptimal performance in the incremental setting. This study introduces a meta-learning-based prototype approach that encourages the model to learn how to adapt quickly while preserving previous knowledge. Concretely, we mimic the incremental evaluation protocol during the base training session by sampling a sequence of pseudo-incremental tasks. Each task in the simulated sequence is trained with a meta-objective that enables rapid adaptation without forgetting. To enhance discrimination among class prototypes, we introduce prototype space redistribution learning, which dynamically updates class prototypes to establish optimal inter-prototype boundaries within the prototype space. Extensive experiments on iFSS datasets built upon the PASCAL and COCO benchmarks demonstrate the strong performance of the proposed approach and offer valuable insights for addressing iFSS challenges.
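As a rough illustration of the prototype mechanism this abstract describes, the sketch below computes a class prototype from a masked support feature map and labels query pixels by cosine similarity to the prototypes accumulated so far; all names, shapes, and the temperature value are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def masked_average_prototype(support_feats, mask):
    """Masked average pooling: one prototype per class from support features.
    support_feats: (C, H, W) feature map; mask: (H, W) binary foreground mask."""
    mask = mask.unsqueeze(0).float()                            # (1, H, W)
    return (support_feats * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)

def classify_query(query_feats, prototypes, tau=10.0):
    """Label each query pixel with its nearest prototype (cosine similarity).
    query_feats: (C, H, W); prototypes: (N, C) for the N classes seen so far."""
    q = F.normalize(query_feats.flatten(1), dim=0)              # (C, HW), unit pixel vectors
    p = F.normalize(prototypes, dim=1)                          # (N, C), unit prototypes
    logits = tau * (p @ q)                                      # (N, HW) scaled similarities
    return logits.argmax(dim=0).view(query_feats.shape[1:])    # (H, W) class map
```

In an incremental step, new prototypes are appended to (or, as in the described redistribution learning, reshuffled within) the `prototypes` table, while the pixel classifier itself stays unchanged.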
Abstract:Singing voice separation and vocal pitch estimation are pivotal tasks in music information retrieval. Existing methods for simultaneously extracting clean vocals and vocal pitches can be classified into two categories: pipeline methods and naive joint learning methods. However, the efficacy of these methods is limited by the following problems: on the one hand, pipeline methods train models for each task independently, resulting in a mismatch between the data distributions at training and testing time; on the other hand, naive joint learning methods simply add the losses of both tasks, possibly leading to a misalignment between the distinct objectives of each task. To solve these problems, we propose a Deep Joint Cascade Model (DJCM) for singing voice separation and vocal pitch estimation. DJCM employs a novel joint cascade model structure to train both tasks concurrently. Moreover, task-specific weights are used to align the different objectives of the two tasks. Experimental results show that DJCM achieves state-of-the-art performance on both tasks, with improvements of 0.45 dB in Signal-to-Distortion Ratio (SDR) for singing voice separation and 2.86% in Overall Accuracy (OA) for vocal pitch estimation. Furthermore, extensive ablation studies validate the effectiveness of each design choice of our proposed model. The code of DJCM is available at https://github.com/Dream-High/DJCM .
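To make the cascade-plus-task-weights idea concrete, here is a minimal sketch in which the separator's vocal estimate feeds the pitch estimator and the two losses are combined with task-specific weights; the module internals and the weight values are placeholders, not DJCM's actual architecture or settings.

```python
import torch.nn as nn

class JointCascade(nn.Module):
    """Illustrative cascade: the separator's output feeds the pitch
    estimator, and both tasks are trained jointly with task-specific
    loss weights. Internals and weights are placeholders, not DJCM's."""
    def __init__(self, separator, pitch_estimator, w_sep=1.0, w_pitch=0.5):
        super().__init__()
        self.separator, self.pitch = separator, pitch_estimator
        self.w_sep, self.w_pitch = w_sep, w_pitch

    def forward(self, mixture, vocal_target, pitch_target, sep_loss, pitch_loss):
        vocal_est = self.separator(mixture)      # stage 1: singing voice separation
        pitch_est = self.pitch(vocal_est)        # stage 2: pitch from the vocal estimate
        loss = (self.w_sep * sep_loss(vocal_est, vocal_target)
                + self.w_pitch * pitch_loss(pitch_est, pitch_target))
        return loss, vocal_est, pitch_est
```

Because the pitch branch consumes the separator's output during training, the pitch model sees the same (imperfect) vocals at training and test time, which is precisely the train/test mismatch that pipeline methods suffer from.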
Abstract:Massive multiple-input multiple-output (MIMO) offers great advantages in 5G wireless communication systems owing to its spectral and energy efficiency. However, hundreds of antennas entail heavy pilot overhead to guarantee reliable channel estimation in frequency-division duplexing (FDD) massive MIMO systems. Compressive sensing (CS) has been applied to channel estimation by exploiting the inherent sparse structure of the massive MIMO channel, but it suffers from high complexity. To overcome this challenge, this paper develops a hybrid channel estimation scheme that integrates model-driven CS with the data-driven deep unrolling technique. The proposed scheme consists of a coarse estimation part and a fine correction part that respectively exploit the inter- and intra-frame sparsities of channels to greatly reduce the pilot overhead. A theoretical result is provided to establish the convergence of the coarse estimation and fine correction networks. Simulation results verify that our scheme can estimate MIMO channels with low pilot overhead while guaranteeing estimation accuracy at relatively low complexity.
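Deep unrolling turns the iterations of a model-driven CS solver into network layers with learned parameters. A minimal sketch of one such layer (an ISTA-style step with complex soft thresholding) is shown below; the step size and threshold stand in for per-layer learned parameters, and none of this is the paper's specific network.

```python
import numpy as np

def soft_threshold(r, theta):
    """Complex soft thresholding: shrink magnitudes, preserve phase."""
    mag = np.abs(r)
    return np.where(mag > theta, (1.0 - theta / np.maximum(mag, 1e-12)) * r, 0.0)

def unrolled_ista_layer(h, y, A, step, theta):
    """One unrolled ISTA iteration for y = A h + n with sparse channel h.
    In a deep-unrolled network, `step` and `theta` are learned per layer."""
    r = h + step * A.conj().T @ (y - A @ h)   # gradient step on the data fit
    return soft_threshold(r, theta)           # sparsity-promoting shrinkage
```

Stacking a handful of such layers and training `step`/`theta` end to end is what lets the data-driven part compensate for the fixed, hand-tuned parameters of classical CS iterations.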
Abstract:Few-shot segmentation (FSS) is a dense prediction task that aims to infer the pixel-wise labels of unseen classes using only a limited number of annotated images. The key challenge in FSS is to classify query pixels using class prototypes learned from the few labeled support exemplars. Prior approaches to FSS have typically focused on learning class-wise descriptors independently from support images, thereby ignoring the rich contextual information and mutual dependencies among support-query features. To address this limitation, we propose a joint learning method termed Masked Cross-Image Encoding (MCE), which is designed to capture common visual properties that describe object details and to learn bidirectional inter-image dependencies that enhance feature interaction. MCE is more than a visual representation enrichment module; it also considers cross-image mutual dependencies and implicit guidance. Experiments on the FSS benchmarks PASCAL-$5^i$ and COCO-$20^i$ demonstrate the strong meta-learning ability of the proposed method.
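The bidirectional inter-image dependency described here can be pictured as cross-attention in both directions, with attention into the support image restricted to its labeled foreground. The sketch below is a bare-bones, single-head rendering of that idea, not MCE's actual module; all tensor names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_cross_image_encoding(support, query, support_fg):
    """Bidirectional cross-attention between support and query tokens;
    attention into the support image is limited to labeled foreground.
    support: (Ns, C); query: (Nq, C); support_fg: (Ns,) bool, >=1 True."""
    d = support.shape[1] ** 0.5
    attn_q2s = (query @ support.T) / d                           # (Nq, Ns)
    attn_q2s = attn_q2s.masked_fill(~support_fg, float("-inf"))  # mask background tokens
    query_out = query + F.softmax(attn_q2s, dim=-1) @ support    # query attends to support
    attn_s2q = (support @ query.T) / d                           # (Ns, Nq)
    support_out = support + F.softmax(attn_s2q, dim=-1) @ query  # support attends to query
    return support_out, query_out
```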
Abstract:In the Internet of Things (IoT), radio frequency fingerprint (RFF) technology has been widely used for passive security authentication to identify specific emitters. However, few works have taken advantage of the independent oscillator distortions at the receiver side, and no work has yet considered filtering out receiver distortions. In this paper, we investigate RFF identification (RFFI) involving unknown receiver distortions, where the phase noise caused by each antenna oscillator is independent. Three RFF schemes are proposed according to the number of receiving antennas. When the number is small, the Mutual Information Weighting Scheme (MIWS) is developed, which carries out weighted voting over the RFFI results at the individual antennas; when the number is moderate, the Distortions Filtering Scheme (DFS) is developed, which filters out the channel noise and receiver distortions; when the number is large enough, the Group-Distortions Filtering and Weighting Scheme (GDFWS) is developed, which integrates the advantages of MIWS and DFS. Furthermore, the ability of DFS to filter out channel noise and receiver distortions is theoretically analyzed at a specific confidence level. Experiments conducted in the presence of both channel noise and receiver distortions verify the effectiveness and robustness of the proposed schemes.
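As a toy illustration of the weighted-voting idea behind MIWS, the snippet below fuses per-antenna classification posteriors with per-antenna weights; in the paper the weights derive from mutual information, whereas here they are simply taken as given, and all names are illustrative.

```python
import numpy as np

def miws_vote(antenna_probs, antenna_weights):
    """Weighted voting across antennas. antenna_probs: (M, K) per-antenna
    class posteriors over K candidate emitters; antenna_weights: (M,)
    non-negative weights (in MIWS they would come from mutual
    information; here they are assumed precomputed)."""
    w = np.asarray(antenna_weights, dtype=float)
    w = w / w.sum()                        # normalize the antenna weights
    fused = w @ np.asarray(antenna_probs)  # (K,) fused posterior
    return int(np.argmax(fused))           # identified emitter index
```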
Abstract:Semantic occupancy perception is essential for autonomous driving, as automated vehicles require a fine-grained perception of 3D urban structures. However, existing relevant benchmarks lack diversity in urban scenes, and they only evaluate front-view predictions. Towards a comprehensive benchmarking of surrounding perception algorithms, we propose OpenOccupancy, the first surrounding semantic occupancy perception benchmark. In the OpenOccupancy benchmark, we extend the large-scale nuScenes dataset with dense semantic occupancy annotations. Previous annotations rely on the superimposition of LiDAR points, where some occupancy labels are missed due to sparse LiDAR channels. To mitigate this problem, we introduce the Augmenting And Purifying (AAP) pipeline to densify the annotations by roughly 2x, with ~4000 human hours involved in the labeling process. In addition, camera-based, LiDAR-based and multi-modal baselines are established for the OpenOccupancy benchmark. Furthermore, considering that the complexity of surrounding occupancy perception lies in the computational burden of high-resolution 3D predictions, we propose the Cascade Occupancy Network (CONet) to refine the coarse prediction, which enhances performance by roughly 30% relative to the baseline. We hope the OpenOccupancy benchmark will boost the development of surrounding occupancy perception algorithms.
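One way to picture the coarse-to-fine refinement that CONet performs: upsample a coarse occupancy prediction, then re-predict only the voxels the coarse stage considers occupied, so the dense high-resolution volume is never fully processed. The sketch below is schematic, under assumed shapes and a hypothetical `refine_head`; it is not CONet itself.

```python
import torch
import torch.nn.functional as F

def cascade_refine(coarse_logits, voxel_feats, refine_head, thresh=0.5):
    """Coarse-to-fine occupancy refinement (illustrative).
    coarse_logits: (1, K, D, H, W) low-res class logits, channel 0 = 'empty';
    voxel_feats: (1, C, 2D, 2H, 2W) high-res features;
    refine_head: maps (Nvox, C) features to (Nvox, K) logits."""
    up = F.interpolate(coarse_logits, scale_factor=2, mode="trilinear")
    occ = up.softmax(1)[:, 1:].sum(1) > thresh       # non-empty probability mass
    idx = occ[0].nonzero(as_tuple=True)              # sparse candidate voxels
    feats = voxel_feats[0][:, idx[0], idx[1], idx[2]].T          # (Nvox, C)
    up[0][:, idx[0], idx[1], idx[2]] = refine_head(feats).T      # overwrite with fine logits
    return up
```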
Abstract:Representing and synthesizing novel views of real-world dynamic scenes from casual monocular videos is a long-standing problem. Existing solutions typically approach dynamic scenes by applying geometry techniques or utilizing temporal information between several adjacent frames, without considering the underlying background distribution of the entire scene or the transmittance along the ray dimension, which limits their performance in static and occluded areas. Our approach, $\textbf{D}$istribution-$\textbf{D}$riven neural radiance fields, offers high-quality view synthesis and a 3D solution to $\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene, and is hence called $\text{D}^4$NeRF. Specifically, it employs a neural representation to capture the scene distribution of the static background and a 6D-input NeRF to represent dynamic objects. Each ray sample is given an additional occlusion weight indicating how its transmittance is apportioned between the static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic scenes and on our urban driving scenes acquired from an autonomous-driving dataset. Extensive experiments demonstrate that our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background. Our code will be released at https://github.com/Luciferbobo/D4NeRF.
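To make the occlusion-weight idea concrete, the sketch below blends a static-background branch and a dynamic branch per ray sample with a weight in [0, 1] before standard volume rendering; this is a plausible reading of the mechanism under assumed tensor shapes, not the paper's exact renderer.

```python
import torch

def render_ray(static_rgb, dyn_rgb, static_sigma, dyn_sigma, occ_w, deltas):
    """Blend static and dynamic branches per sample with occlusion weight
    occ_w in [0, 1], then alpha-composite along the ray.
    Shapes: (S,) for densities/weights/deltas, (S, 3) for colors."""
    sigma = occ_w * dyn_sigma + (1 - occ_w) * static_sigma          # blended density
    rgb = occ_w[:, None] * dyn_rgb + (1 - occ_w)[:, None] * static_rgb
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # per-sample opacity
    trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1 - alpha + 1e-10])[:-1], 0)
    return (trans * alpha)[:, None].mul(rgb).sum(dim=0)             # pixel color (3,)
```

With `occ_w` driven toward 0 on static regions, the background branch alone explains those samples, which is what allows the clean detached background the abstract reports.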
Abstract:In recent years, vision-centric perception has flourished in various autonomous driving tasks, including 3D detection, semantic map construction, motion forecasting, and depth estimation. Nevertheless, the latency of vision-centric approaches is too high for practical deployment (e.g., most camera-based 3D detectors have a runtime greater than 300 ms). To bridge the gap between ideal research conditions and real-world applications, it is necessary to quantify the trade-off between performance and efficiency. Traditionally, autonomous-driving perception benchmarks perform offline evaluation, neglecting the inference-time delay. To mitigate this problem, we propose the Autonomous-driving StreAming Perception (ASAP) benchmark, the first benchmark to evaluate the online performance of vision-centric perception in autonomous driving. On the basis of the 2 Hz-annotated nuScenes dataset, we first propose an annotation-extending pipeline to generate high-frame-rate labels for the 12 Hz raw images. Referring to practical deployment, the Streaming Perception Under constRained-computation (SPUR) evaluation protocol is further constructed, where the 12 Hz inputs are utilized for streaming evaluation under different computational resource constraints. In the ASAP benchmark, comprehensive experimental results reveal that model rankings change under different constraints, suggesting that model latency and the computation budget should be considered as design choices to optimize practical deployment. To facilitate further research, we establish baselines for camera-based streaming 3D detection, which consistently enhance streaming performance across various hardware. ASAP project page: https://github.com/JeffWang987/ASAP.
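The core of streaming evaluation is the matching rule: each query timestamp is scored against the most recent prediction that has already finished, so inference delay directly degrades accuracy. A minimal sketch of that rule is below; function and variable names are illustrative, not the benchmark's API.

```python
import bisect

def match_streaming(query_times, pred_finish_times, preds):
    """For each query time, return the latest prediction whose inference
    finished by that time; slower models thus get matched to staler
    predictions. pred_finish_times must be sorted ascending."""
    matched = []
    for t in query_times:
        i = bisect.bisect_right(pred_finish_times, t) - 1   # latest finished prediction
        matched.append(preds[i] if i >= 0 else None)        # None: nothing ready yet
    return matched
```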
Abstract:Compressive sensing (CS) has recently emerged as an extremely efficient technique for wideband spectrum sensing. In compressive spectrum sensing (CSS), the sparsity or the noise information must be known in advance for reliable reconstruction. However, such information is usually absent in practical applications. In this paper, we propose a blind-block orthogonal least squares-based compressive spectrum sensing (B-BOLS-CSS) algorithm, which utilizes a novel blind stopping rule to remove the dependence on such prior information. Specifically, we first present both noiseless and noisy recovery guarantees for the BOLS algorithm based on the mutual incoherence property (MIP). Motivated by these, we then formulate the blind stopping rule, which exploits an $\ell_{2,\infty}$ sufficient statistic to blindly test the support atoms in the remaining measurement matrix. We further provide a theoretical performance analysis of the holistic B-BOLS-CSS algorithm by developing a lower bound on the signal-to-noise ratio (SNR) that ensures the probability of exact recovery is no lower than a given threshold. Simulations not only demonstrate the improvement brought by our derived theoretical results, but also illustrate that B-BOLS-CSS works well in both low- and high-SNR environments.
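In the spirit of the $\ell_{2,\infty}$ rule described here, a blind stopping test can compare the largest block correlation of the current residual against a data-driven threshold. The sketch below illustrates that shape of test; the threshold form and noise proxy are assumptions for illustration, not the paper's exact statistic.

```python
import numpy as np

def blind_stop(residual, A_remaining, block_size, kappa=2.0):
    """Blind stopping test: stop when no remaining block correlates with
    the residual beyond a noise-level threshold. Columns of A_remaining
    are assumed grouped into contiguous blocks of size block_size."""
    corr = A_remaining.T @ residual                   # per-atom correlations
    blocks = corr.reshape(-1, block_size)             # regroup into blocks
    stat = np.max(np.linalg.norm(blocks, axis=1))     # l_{2,inf} statistic
    sigma_hat = np.std(residual)                      # crude noise-level proxy
    return stat < kappa * sigma_hat * np.sqrt(block_size)
```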
Abstract:This paper proposes two novel schemes for wideband compressive spectrum sensing (CSS) via the block orthogonal matching pursuit (BOMP) algorithm, aimed at achieving high sensing accuracy in real time. These schemes reliably recover the spectrum by adaptively adjusting the number of required measurements without inducing unnecessary sampling redundancy. To this end, the minimum number of measurements required for successful recovery is first derived in terms of its probabilistic lower bound. Then, a CSS scheme is proposed by tightening the derived lower bound, where the key is the design of a nonlinear exponential indicator through a general-purpose sampling-controlled algorithm (SCA). In particular, a sampling-controlled BOMP (SC-BOMP) is developed through a holistic integration of the existing BOMP and the proposed SCA. For fast implementation, a modified version of SC-BOMP is further developed by exploiting the block orthogonality in the form of the sub-coherence of measurement matrices, which allows more compressive sampling in terms of a smaller lower bound on the number of measurements. This fast SC-BOMP scheme achieves a desired tradeoff between complexity and performance. Simulations demonstrate that the two SC-BOMP schemes outperform other benchmark algorithms.
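The sampling-controlled idea can be illustrated as a loop that starts from a small number of measurements and requests more only until the BOMP reconstruction explains the data, rather than fixing a worst-case sample count up front. In the sketch below, `acquire_rows` and `bomp_solve` are stand-ins for the sampling front end and a BOMP solver, and the residual test replaces the paper's bound-based indicator.

```python
import numpy as np

def sampling_controlled_recovery(acquire_rows, bomp_solve, m0, m_max, step, tol=1e-3):
    """Adaptively grow the measurement count: solve with m measurements,
    accept once the reconstruction fits the data, otherwise sample more."""
    m, x_hat = m0, None
    while m <= m_max:
        A, y = acquire_rows(m)                  # current measurement matrix / samples
        x_hat = bomp_solve(A, y)
        if np.linalg.norm(y - A @ x_hat) <= tol * np.linalg.norm(y):
            return x_hat, m                     # recovered with m measurements
        m += step                               # request more measurements
    return x_hat, min(m, m_max)                 # best effort at the budget
```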