Abstract: In-vehicle wireless networks are crucial for advancing smart transportation systems and enhancing interaction among vehicles and their occupants. However, few studies in the current state of the art investigate in-vehicle channel characteristics across multiple frequency bands. In this paper, we present measurement campaigns conducted in a van and a car in the sub-7 GHz, millimeter-wave (mmWave), and sub-terahertz (sub-THz) bands. These campaigns aim to compare the channel characteristics of in-vehicle scenarios across frequency bands. Channel impulse responses (CIRs) were measured at various locations distributed across the engine compartments of both the van and the car. The CIR results reveal a high similarity in delay properties between the sub-7 GHz and mmWave bands for the engine-bay measurements, whereas sparse channels are observed in the sub-THz band in the engine-bay scenarios. Channel spatial profiles in the passenger cabins of both vehicles are obtained by a directional scan sounding scheme for the three bands. We compare the power angular delay profiles (PADPs) measured in different frequency bands in two line-of-sight (LOS) scenarios and one non-LOS (NLOS) scenario. Several major multipath components (MPCs) can be identified in all frequency bands, and their trajectories are traced based on the geometry of the vehicles. The angular spread of arrival is also calculated for the three scenarios. The analysis of channel characteristics in this paper can enhance the understanding of in-vehicle channels and foster the evolution of in-vehicle wireless networks.
Abstract: Noisy labels, which are common in real-world datasets, can significantly impair the training of deep learning models. However, recent noise-combating methods overlook the long-tailed distribution of real data, which can significantly weaken denoising strategies. Meanwhile, the mismanagement of noisy labels further compromises the model's ability to handle long-tailed data. To tackle this issue, we propose a novel approach for managing data characterized by both long-tailed distributions and noisy labels. First, we introduce a loss-distance cross-selection module, which integrates class predictions and feature distributions to filter clean samples, effectively addressing the uncertainties introduced by noisy labels and long-tailed distributions. Subsequently, we employ optimal transport strategies to generate pseudo-labels for the noisy set in a semi-supervised training manner, enhancing pseudo-label quality while mitigating the effects of sample scarcity caused by the long-tailed distribution. We conduct experiments on both synthetic and real-world datasets, and the comprehensive experimental results demonstrate that our method surpasses current state-of-the-art methods. Our code will be released in the future.
Abstract: In recent years, deep neural networks (DNNs) have achieved remarkable success in computer vision tasks, and this success often depends greatly on the richness of the data. However, acquiring data and high-quality ground truth requires substantial manpower and money. In the long, tedious process of data annotation, annotators are prone to making mistakes, resulting in incorrectly labeled images, i.e., noisy labels. The emergence of noisy labels is therefore inevitable. Moreover, since research shows that DNNs can easily fit noisy labels, their presence can significantly damage the model training process. It is thus crucial to combat noisy labels in computer vision tasks, especially classification. In this survey, we first comprehensively review the evolution of deep learning approaches for combating noisy labels in image classification. We also review the different noise patterns that have been proposed for designing robust algorithms. Furthermore, we explore the inner pattern of real-world label noise and propose an algorithm that generates a synthetic label noise pattern guided by real-world data. We test the algorithm on the well-known real-world dataset CIFAR-10N to form a new real-world-data-guided synthetic benchmark and evaluate several typical noise-robust methods on this benchmark.
Abstract: Omni-directional pathloss, i.e., the pathloss observed when omni-directional antennas are used at the link ends, is essential for system design and evaluation. In the millimeter-wave (mm-Wave) and beyond bands, high-gain directional antennas are widely used for channel measurements due to the significant signal attenuation. Conventional methods for omni-directional pathloss estimation are based on a directional scanning sounding (DSS) system, i.e., a single directional antenna placed at the center of a rotator capturing signals from different rotation angles. The omni-directional pathloss is then obtained by summing either all powers above the noise level or only the powers of detected propagation paths. However, both methods are problematic due to the relatively wide main beams and high side-lobes of the directional antennas. In this letter, a directional-antenna-based virtual antenna array (VAA) system is implemented for omni-directional pathloss estimation. The VAA scheme uses the same measurement system as the DSS, yet it offers high angular resolution (i.e., a narrow main beam) and low side-lobes, which is essential for accurate multipath detection in the power angular delay profiles (PADPs) and thereby for obtaining an accurate omni-directional pathloss. A measurement campaign was designed and conducted in an indoor corridor at 28-30 GHz to verify the effectiveness of the proposed method.