Abstract: Large Language Models (LLMs) have transformed task automation and content generation across various domains while incorporating safety filters to prevent misuse. We introduce a novel jailbreaking framework that employs distributed prompt processing combined with iterative refinement to bypass these safety measures, particularly in generating malicious code. Our architecture consists of four key modules: prompt segmentation, parallel processing, response aggregation, and LLM-based jury evaluation. Tested on 500 malicious prompts across 10 cybersecurity categories, the framework achieves a 73.2% Success Rate (SR) in generating malicious code. Notably, our comparative analysis reveals that traditional single-LLM judge evaluation overestimates SRs (93.8%) compared to our LLM jury system (73.2%), with manual verification confirming that single-judge assessments often accept incomplete implementations. Moreover, an ablation study shows that our distributed architecture improves the SR by 12% over a non-distributed approach, highlighting both the effectiveness of distributed prompt processing and the importance of robust evaluation methodologies in assessing jailbreak attempts.
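The jury-versus-single-judge gap above comes down to how individual verdicts are aggregated. The abstract does not specify the aggregation rule, so the following is a minimal illustrative sketch assuming a simple majority vote over hypothetical judge labels; the paper's actual judge prompts and models are not reproduced here.

```python
# Minimal sketch of LLM-jury aggregation via majority vote.
# The judge labels ("success", "incomplete") are hypothetical; the
# paper's actual evaluation rubric is not specified in the abstract.
from collections import Counter

def jury_verdict(verdicts):
    """Return the majority label among the individual judge verdicts."""
    counts = Counter(verdicts)
    label, _ = counts.most_common(1)[0]
    return label

# Three judges assess whether generated code fully implements the prompt;
# a single lenient judge would accept, but the jury requires agreement.
print(jury_verdict(["success", "incomplete", "success"]))  # -> success
```

A majority (or stricter, unanimous) rule naturally filters out the incomplete implementations that a single lenient judge tends to accept, which is consistent with the jury SR (73.2%) falling below the single-judge SR (93.8%).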
Abstract: Ultra-massive multiple-input multiple-output (UM-MIMO) technology is a key enabler for 6G networks, offering exceptionally high data rates in millimeter-wave (mmWave) and Terahertz (THz) frequency bands. The deployment of large antenna arrays at high frequencies transitions wireless communication into the radiative near-field, where precise beam alignment becomes essential for accurate channel estimation. Unlike far-field systems, which rely on the angular domain only, near-field operation necessitates beam search across both angle and distance dimensions, leading to substantially higher training overhead. To address this challenge, we propose a discrete Fourier transform (DFT)-based beam alignment scheme to mitigate the training overhead. We highlight that the reduced path loss at shorter distances can compensate for the beamforming losses typically associated with using far-field codebooks in near-field scenarios. Additionally, far-field beamforming in the near-field exhibits angular spread, with its width determined by the user's range and angle. Leveraging this relationship, we develop a correlation interferometry (CI) algorithm, termed CI-DFT, to efficiently estimate user angle and range parameters. Simulation results demonstrate that the proposed scheme achieves performance close to exhaustive search in terms of achievable rate while significantly reducing the training overhead by 87.5%.
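The coarse step of a DFT-codebook beam search can be sketched in a few lines. This is an illustrative toy, assuming a uniform linear array with half-wavelength spacing and an idealized far-field channel; the array size, the normalized spatial frequency, and the channel model are assumptions for the example, not the paper's setup.

```python
import numpy as np

# Toy sketch of far-field DFT-codebook beam search: correlate the
# received channel with every DFT codeword and keep the strongest beam.
N = 64                       # number of ULA antennas (assumed)
n = np.arange(N)

# DFT codebook: column k steers toward normalized spatial frequency k/N.
F = np.exp(2j * np.pi * np.outer(n, np.arange(N)) / N) / np.sqrt(N)

# Hypothetical far-field channel at normalized spatial frequency theta.
theta = 0.3
h = np.exp(2j * np.pi * n * theta) / np.sqrt(N)

# Coarse angle estimate: codeword with maximum correlation magnitude.
gains = np.abs(F.conj().T @ h)
k_hat = int(np.argmax(gains))
print(k_hat, k_hat / N)      # best beam index and its spatial frequency
```

Here the selected index satisfies `k_hat / N ≈ theta`, i.e., the DFT grid quantizes the angle to a resolution of 1/N; in the near-field, the same sweep additionally exhibits the range-dependent angular spread that CI-DFT exploits to recover distance.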
Abstract: Integrated sensing and communication (ISAC) has emerged as a transformative paradigm, enabling situationally aware and perceptive next-generation wireless networks through the co-design of shared network resources. With the adoption of millimeter-wave (mmWave) and terahertz (THz) frequency bands, ultra-massive MIMO (UM-MIMO) systems and holographic surfaces unlock the potential of near-field (NF) propagation, characterized by spherical wavefronts that facilitate beam manipulation in both angular and range domains. This paper presents a unified approach to near-field beam-training and sensing, introducing a dual-purpose codebook design that employs discrete Fourier transform (DFT)-based codebooks for coarse estimation of sensing parameters and polar codebooks for parameter refinement. Leveraging these range and angle estimates, a customized low-complexity space-time adaptive processing (STAP) technique is proposed for NF-ISAC to detect slow-moving targets and efficiently mitigate clutter. The interplay between the codebooks and the NF-STAP framework offers three key advantages: reduced communication beam-training overhead, improved estimation accuracy, and minimal STAP computational complexity. Simulation results show that the proposed framework can reduce STAP complexity by three orders of magnitude, validating its efficacy and highlighting the potential of the proposed approach to seamlessly integrate NF communication and sensing functionalities in future wireless networks.
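The STAP component above builds on the classic adaptive weight computation, w = R⁻¹s / (sᴴR⁻¹s), where R is the clutter-plus-noise covariance and s the space-time steering vector. The sketch below shows this textbook form with toy dimensions and a synthetic covariance; the paper's customized low-complexity NF-STAP design is not reproduced here.

```python
import numpy as np

# Textbook STAP/MVDR weight computation with toy values:
#   w = R^{-1} s / (s^H R^{-1} s)
rng = np.random.default_rng(0)
M = 8                                    # joint space-time dimension (toy)

# Synthetic Hermitian clutter-plus-noise covariance; the identity term
# (diagonal loading) keeps it well-conditioned and invertible.
A = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
R = A @ A.conj().T + np.eye(M)

# Space-time steering vector toward a hypothetical target (toy values).
s = np.exp(2j * np.pi * 0.1 * np.arange(M))

Rinv_s = np.linalg.solve(R, s)           # solve instead of explicit inverse
w = Rinv_s / (s.conj() @ Rinv_s)

# The adaptive filter keeps unit gain toward the steering direction while
# suppressing the clutter structure encoded in R.
print(abs(w.conj() @ s))                 # -> 1.0 (distortionless constraint)
```

Solving the linear system rather than inverting R is the standard practice; the three-orders-of-magnitude complexity reduction reported above comes from the paper's customized processing, which the codebook-derived range and angle estimates make possible.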
Abstract: Radio Frequency Fingerprinting (RFF) techniques promise to authenticate wireless devices at the physical layer based on inherent hardware imperfections introduced during manufacturing. Such RF transmitter imperfections are reflected in over-the-air signals, allowing receivers to accurately identify the RF transmitting source. Recent advances in Machine Learning, particularly in Deep Learning (DL), have improved the ability of RFF systems to extract and learn the complex features that make up the device-specific fingerprint. However, integrating DL techniques with RFF and operating the system in real-world scenarios presents numerous challenges. This article identifies and analyzes these challenges while considering the three reference phases of any DL-based RFF system: (i) data collection and preprocessing, (ii) training, and finally, (iii) deployment. Our investigation points out the current open problems that prevent the real-world deployment of RFF while discussing promising future directions, thus paving the way for further research in the area.