Abstract: Generative models, including denoising diffusion models (DMs), are gaining attention in wireless applications due to their ability to learn complex data distributions. In this paper, we propose CoDiPhy, a novel framework that leverages conditional denoising diffusion models to address a wide range of wireless physical layer problems. A key challenge in applying DMs is the need to assume or approximate Gaussian signal models. CoDiPhy addresses this by incorporating a conditional encoder as a guidance mechanism, mapping problem observations to a latent space and removing the Gaussian constraint. By combining conditional encoding, time embedding layers, and a U-Net-based main neural network, CoDiPhy introduces a noise prediction neural network that replaces the conventional approach used in DMs. This adaptation enables CoDiPhy to serve as an effective solution for a wide range of detection, estimation, and predistortion tasks. We demonstrate CoDiPhy's adaptability through two case studies: an OFDM receiver for detection and phase noise compensation for estimation. In both cases, CoDiPhy outperforms conventional methods by a significant margin.
Abstract: Biometric authentication has become increasingly popular due to its security and convenience; however, traditional biometrics are becoming less desirable in scenarios such as new mobile devices, Virtual Reality, and Smart Vehicles. For example, while face authentication is widely used, it raises significant privacy concerns: the collection of complete facial data makes it less suitable for privacy-sensitive applications. Lip authentication, on the other hand, has emerged as a promising biometric method. However, existing lip-based authentication methods depend heavily on the static lip shape when the mouth is closed, which is less robust due to the dynamic motion of the lips and can barely work while the user is speaking. In this paper, we revisit the nature of lip biometrics and extract shape-independent features from the lips. We study the dynamic characteristics of lip biometrics based on articulator motion. Building on this knowledge, we propose a system for shape-independent continuous authentication via lip articulator dynamics. This system enables robust, shape-independent, and continuous authentication, making it particularly suitable for scenarios with high security and privacy requirements. We conducted comprehensive experiments in different environments and attack scenarios and collected a dataset of 50 subjects. The results indicate that our system achieves an overall accuracy of 99.06% and remains robust under advanced mimic attacks and AI deepfake attacks, making it a viable solution for continuous biometric authentication in various applications.