Abstract:Fluid antenna system (FAS), a new form of reconfigurable antenna technology offering shape and position flexibility, has emerged as an exciting and possibly transformative technology for wireless communication systems. FAS refers to any software-controlled fluidic, conductive, or dielectric structure that can dynamically alter an antenna's shape and position to change its gain, radiation pattern, operating frequency, and other critical radiation characteristics. With these capabilities, FAS is widely anticipated to contribute greatly to the upcoming sixth generation (6G) wireless networks. This article substantiates this view by addressing four major questions: 1) Is FAS crucial to 6G? 2) How can FAS be characterized? 3) What are the applications of FAS? 4) What are the relevant challenges and future research directions? In particular, five promising research directions that underscore the potential of FAS are discussed. We conclude this article by showcasing the impressive performance of FAS.
Abstract:Combining wireless communication with large artificial intelligence (AI) models can open up a myriad of novel application scenarios. In sixth generation (6G) networks, ubiquitous communication and computing resources allow large AI models to deliver democratized, large-AI-model-based services that enable real-time applications such as autonomous vehicles, smart cities, and Internet of Things (IoT) ecosystems. However, security considerations and the need for sustainable communication resources limit the deployment of large AI models over distributed wireless networks. This paper provides a comprehensive overview of privacy, security, and trustworthiness for distributed wireless large AI models (WLAM). In particular, a detailed privacy and security analysis for distributed WLAM is first presented. The classifications and theoretical findings on privacy and security in distributed WLAM are then discussed, followed by the trustworthiness and ethics of implementing distributed WLAM. Finally, comprehensive applications of distributed WLAM are presented in the context of electromagnetic signal processing.
Abstract:This paper investigates the signal detection problem in colored noise with an unknown covariance matrix. In particular, we focus on detecting a non-random signal by capitalizing on the leading eigenvalue (a.k.a. Roy's largest root) of the whitened sample covariance matrix as the test statistic. To this end, the whitened sample covariance matrix is constructed from \(p\) plausible signal-bearing samples and \(n\) noise-only samples, each of dimension \(m\). Since the signal is non-random, the whitened sample covariance matrix turns out to have a {\it non-central} \(F\)-distribution with a rank-one non-centrality parameter. Therefore, the performance of the test entails the statistical characterization of the leading eigenvalue of the non-central \(F\)-matrix, which we address by deriving its cumulative distribution function (c.d.f.) in closed form, leveraging the powerful orthogonal polynomial approach in random matrix theory. This new c.d.f. is instrumental in analyzing the receiver operating characteristic (ROC) of the detector. We also extend our analysis to the high-dimensional regime in which \(m\), \(n\), and \(p\) diverge such that \(m/n\) and \(m/p\) remain fixed. It turns out that, when \(m=n\) is fixed, the power of the test improves if the signal-to-noise ratio (SNR) is at least \(O(p)\), whereas the corresponding SNR requirement in the high-dimensional regime is at least \(O(p^2)\). More intriguingly, for \(m<n\) with an SNR of order \(O(p)\), the leading eigenvalue has no power to detect {\it weak} signals in the high-dimensional regime.
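As an illustration of the detector described above, the following is a minimal numerical sketch in Python using synthetic Gaussian data; the colored-noise covariance, the SNR value, and the fixed threshold are assumptions made purely for illustration, and in practice the threshold would be set from the derived c.d.f. for a target false-alarm rate.

import numpy as np

rng = np.random.default_rng(0)
m, n, p = 8, 64, 32                 # dimension, noise-only samples, signal-bearing samples
snr = 5.0                           # assumed SNR for the synthetic example

Sigma = np.eye(m) + 0.3 * np.ones((m, m))     # assumed (unknown-to-the-detector) colored-noise covariance
L = np.linalg.cholesky(Sigma)
s = np.zeros(m); s[0] = 1.0                   # deterministic (non-random) signal direction

X_noise = L @ rng.standard_normal((m, n))                                          # noise-only samples
X_sig = L @ rng.standard_normal((m, p)) + np.sqrt(snr) * np.outer(s, np.ones(p))   # plausible signal-bearing samples

S_noise = X_noise @ X_noise.T / n
S_sig = X_sig @ X_sig.T / p

# Whitened sample covariance matrix (a non-central F-matrix under the model above)
Psi_hat = np.linalg.solve(S_noise, S_sig)

# Test statistic: leading eigenvalue (Roy's largest root); eigenvalues are real up to round-off
lam_max = np.max(np.real(np.linalg.eigvals(Psi_hat)))
threshold = 10.0                    # placeholder; set from the c.d.f. for a target false-alarm rate
print("lambda_max =", lam_max, "->", "H1" if lam_max > threshold else "H0")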
Abstract:Signal detection in colored noise with an unknown covariance matrix has numerous applications across various scientific and engineering disciplines. The analysis focuses on the square of the condition number \(\kappa^2(\cdot)\), defined as the ratio of the largest to smallest eigenvalue \((\lambda_{\text{max}}/\lambda_{\text{min}})\) of the whitened sample covariance matrix \(\bm{\widehat{\Psi}}\), constructed from \(p\) signal-plus-noise samples and \(n\) noise-only samples, both \(m\)-dimensional. This statistic is denoted as \(\kappa^2(\bm{\widehat{\Psi}})\). A finite-dimensional characterization of this statistic under the null and alternative hypotheses, and hence of the false alarm and detection probabilities, has been an open problem. Therefore, in this work, we address this by deriving the cumulative distribution function (c.d.f.) of \(\kappa^2(\bm{\widehat{\Psi}})\) using the powerful orthogonal polynomial approach in random matrix theory. These c.d.f. expressions are then used to statistically characterize the performance of \(\kappa^2(\bm{\widehat{\Psi}})\).
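A brief companion sketch for the condition-number statistic, reusing the whitened sample covariance construction from the previous sketch; the sample sizes and the white-noise simplification under the null hypothesis are assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(1)
m, n, p = 8, 64, 32                 # dimension, noise-only samples, signal-plus-noise samples

# Under H0 (no signal) and with white noise for brevity; colored noise would be handled identically
X_noise = rng.standard_normal((m, n))
X_sig = rng.standard_normal((m, p))

Psi_hat = np.linalg.solve(X_noise @ X_noise.T / n, X_sig @ X_sig.T / p)
eigs = np.sort(np.real(np.linalg.eigvals(Psi_hat)))
kappa2 = eigs[-1] / eigs[0]         # test statistic kappa^2(Psi_hat) = lambda_max / lambda_min
print("kappa^2 =", kappa2)          # compared against a threshold obtained from the derived c.d.f.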
Abstract:The drive toward automating cellular network operations has grown with the increasing complexity of these systems. Despite advancements, full autonomy currently remains out of reach due to reliance on human intervention for modeling network behaviors and defining policies to meet target requirements. Network Digital Twins (NDTs) have shown promise in enhancing network intelligence, but the successful implementation of this technology is constrained by use case-specific architectures, limiting its role in advancing network autonomy. A more capable network intelligence, or "telecommunications brain", is needed to enable seamless, autonomous management of cellular networks. Large Language Models (LLMs) have emerged as potential enablers for this vision but face challenges in network modeling, especially in reasoning and handling diverse data types. To address these gaps, we introduce Hermes, a chain of LLM agents that uses "blueprints" for constructing NDT instances through structured and explainable logical steps. Hermes allows automatic, reliable, and accurate network modeling of diverse use cases and configurations, thus marking progress toward fully autonomous network operations.
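For intuition only, here is a hypothetical (not Hermes-specific) sketch of a blueprint-driven chain of agents assembling an NDT instance step by step; the blueprint steps and the call_llm helper are invented placeholders for whatever agent framework and model API are actually used.

from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a chat-completion call; here it simply echoes the step.
    return f"[model output for: {prompt[:50]}...]"

@dataclass
class NDTInstance:
    artifacts: dict = field(default_factory=dict)

BLUEPRINT = [
    ("topology", "Extract the cell/site topology from the provided configuration dump."),
    ("traffic_model", "Fit a per-cell traffic model from the KPI time series."),
    ("policy_check", "Verify the modeled behavior against the target requirements."),
]

def build_ndt(blueprint) -> NDTInstance:
    ndt = NDTInstance()
    for step_name, instruction in blueprint:      # each agent handles one structured, explainable step
        ndt.artifacts[step_name] = call_llm(instruction)
    return ndt

print(list(build_ndt(BLUEPRINT).artifacts))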
Abstract:The telecommunications industry's rapid evolution demands intelligent systems capable of managing complex networks and adapting to emerging technologies. While large language models (LLMs) show promise in addressing these challenges, their deployment in telecom environments faces significant constraints due to edge device limitations and inconsistent documentation. To bridge this gap, we present TeleOracle, a telecom-specialized retrieval-augmented generation (RAG) system built on the Phi-2 small language model (SLM). To improve context retrieval, TeleOracle employs a two-stage retriever that incorporates semantic chunking and hybrid keyword and semantic search. Additionally, we expand the context window during inference to enhance the model's performance on open-ended queries. We also employ low-rank adaptation for efficient fine-tuning. A thorough analysis of the model's performance indicates that our RAG framework is effective in aligning Phi-2 to the telecom domain in a downstream question and answer (QnA) task, achieving a 30% improvement in accuracy over the base Phi-2 model, reaching an overall accuracy of 81.20%. Notably, we show that our model not only performs on par with much larger LLMs but also achieves a higher faithfulness score, indicating higher adherence to the retrieved context.
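To make the hybrid keyword-plus-semantic retrieval idea concrete, here is a hedged sketch of the first retrieval stage; the embed function is a toy stand-in for a real sentence encoder, and the scoring weight alpha and the example documents are arbitrary assumptions.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy deterministic embedding; a real system would use a sentence encoder here.
    rng = np.random.default_rng(sum(text.lower().encode()))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def hybrid_retrieve(query, chunks, k=3, alpha=0.5):
    q_vec, q_terms = embed(query), set(query.lower().split())
    scored = []
    for chunk in chunks:
        sem = float(q_vec @ embed(chunk))                                      # semantic score (cosine)
        kw = len(q_terms & set(chunk.lower().split())) / max(len(q_terms), 1)  # keyword overlap score
        scored.append((alpha * sem + (1 - alpha) * kw, chunk))                 # hybrid score
    return [c for _, c in sorted(scored, reverse=True)[:k]]                    # top-k chunks passed onward

docs = ["The PDCCH carries downlink control information.",
        "RAG systems retrieve context before generation.",
        "Phi-2 is a small language model with 2.7B parameters."]
print(hybrid_retrieve("What does the PDCCH carry?", docs, k=2))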
Abstract:As machine intelligence evolves, the need to test and compare the problem-solving abilities of different AI models grows. However, current benchmarks are often overly simplistic, allowing models to perform uniformly well, making it difficult to distinguish their capabilities. Additionally, benchmarks typically rely on static question-answer pairs, which models might memorize or guess. To address these limitations, we introduce the Dynamic Intelligence Assessment (DIA), a novel methodology for testing AI models using dynamic question templates and improved metrics across multiple disciplines such as mathematics, cryptography, cybersecurity, and computer science. The accompanying DIA-Bench dataset, which includes 150 diverse and challenging task templates with mutable parameters, is presented in various formats such as text, PDFs, compiled binaries, and visual puzzles. Our framework introduces four new metrics to assess a model's reliability and confidence across multiple attempts. These metrics revealed that even simple questions are frequently answered incorrectly when posed in varying forms, highlighting significant gaps in models' reliability. Notably, models like GPT-4o tended to overestimate their mathematical abilities, while ChatGPT-4o demonstrated better decision-making and performance through effective tool usage. We evaluated eight state-of-the-art large language models (LLMs) using DIA-Bench, showing that current models struggle with complex tasks and often display unexpectedly low confidence, even with simpler questions. The DIA framework sets a new standard for assessing not only problem-solving but also a model's adaptive intelligence and ability to assess its own limitations. The dataset is publicly available on our project's website.
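A minimal sketch of what a dynamic question template with mutable parameters might look like; the modular-exponentiation task below is an invented example rather than one of the 150 DIA-Bench templates, and each draw produces a fresh instance together with its ground truth, so memorized question-answer pairs do not help.

import random

def modular_exponentiation_template(rng: random.Random):
    # One mutable-parameter template: each draw yields a new question plus its ground-truth answer.
    base, exp, mod = rng.randint(2, 99), rng.randint(10, 999), rng.randint(100, 9999)
    question = f"Compute ({base}^{exp}) mod {mod}."
    answer = pow(base, exp, mod)
    return question, answer

rng = random.Random(42)
for _ in range(3):
    q, a = modular_exponentiation_template(rng)
    print(q, "->", a)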
Abstract:This letter investigates an unmanned aerial vehicle (UAV) network with integrated sensing and communication (ISAC), where multiple radar-equipped UAVs simultaneously sense the locations of ground users and provide communication services. To characterize the trade-off between communication and sensing (C\&S) in the system, we formulate a multi-objective optimization problem (MOP) that maximizes the total network utility and minimizes the localization Cram\'er-Rao bounds (CRB) of the ground users by jointly optimizing the deployment and power control of the UAVs. Inspired by the huge potential of large language models (LLMs) for prediction and inference, we propose an LLM-enabled decomposition-based multi-objective evolutionary algorithm (LEDMA) for solving the highly non-convex MOP. We first adopt a decomposition-based scheme to decompose the MOP into a series of optimization sub-problems. We then integrate LLMs as black-box search operators, with MOP-specific prompt engineering, into the MOEA framework to solve the sub-problems simultaneously. Numerical results demonstrate that the proposed LEDMA finds a clear trade-off between C\&S and outperforms baseline MOEAs in terms of the obtained Pareto fronts and convergence.
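For illustration, here is a toy decomposition-based multi-objective loop in which a black-box "LLM operator" proposes candidates for each scalarized sub-problem; the quadratic objectives and the llm_propose stub are hypothetical stand-ins for the utility/CRB objectives and the prompted LLM described above.

import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # Toy stand-ins for the two objectives (e.g., negative network utility and localization CRB), both minimized.
    return np.array([np.sum((x - 0.2) ** 2), np.sum((x - 0.8) ** 2)])

def llm_propose(parent, step=0.1):
    # Hypothetical stand-in for an LLM search operator driven by problem-specific prompts.
    return np.clip(parent + rng.normal(0.0, step, parent.shape), 0.0, 1.0)

weights = [np.array([w, 1.0 - w]) for w in np.linspace(0.05, 0.95, 10)]   # one weight vector per sub-problem
population = [rng.random(2) for _ in weights]

for _ in range(50):
    for i, w in enumerate(weights):
        child = llm_propose(population[i])
        # Tchebycheff scalarization: accept the child if it improves its sub-problem
        if np.max(w * objectives(child)) < np.max(w * objectives(population[i])):
            population[i] = child

print(np.round([objectives(x) for x in population], 3))   # approximate Pareto front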
Abstract:Recent studies show that large language models (LLMs) struggle with technical standards in telecommunications. We propose a fine-tuned retrieval-augmented generation (RAG) system based on the Phi-2 small language model (SLM) to serve as an oracle for communication networks. Our developed system leverages forward-looking semantic chunking to adaptively determine parsing breakpoints based on embedding similarity, enabling effective processing of diverse document formats. To handle the challenge of multiple similar contexts in technical standards, we employ a re-ranking algorithm to prioritize the most relevant retrieved chunks. Recognizing the limitations of Phi-2's small context window, we implement a recent technique, namely SelfExtend, to expand the context window during inference, which not only boosts performance but also accommodates a wider range of user queries and design requirements, from customers to specialized technicians. For fine-tuning, we utilize the low-rank adaptation (LoRA) technique to enhance computational efficiency during training and enable effective fine-tuning on small datasets. Our comprehensive experiments demonstrate substantial improvements over existing question-answering approaches in the telecom domain, achieving performance that exceeds larger language models such as GPT-4 (which is about 880 times larger in size). This work presents a novel approach to leveraging SLMs for communication networks, offering a balance of efficiency and performance, and can serve as a foundation towards agentic language models for networks.
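A hedged sketch of forward-looking semantic chunking, where a breakpoint is inserted when the embedding similarity between the running chunk and the next sentence drops below a threshold; the embed function, the threshold value, and the example sentences are placeholders, not the system's actual encoder or settings.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy deterministic embedding; a real system would call the actual sentence encoder.
    rng = np.random.default_rng(sum(text.lower().encode()))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def semantic_chunks(sentences, threshold=0.2):
    chunks, current = [], [sentences[0]]
    for nxt in sentences[1:]:
        sim = float(embed(" ".join(current)) @ embed(nxt))   # cosine similarity (unit vectors)
        if sim < threshold:                                  # similarity drop -> chunk breakpoint
            chunks.append(" ".join(current))
            current = [nxt]
        else:
            current.append(nxt)
    chunks.append(" ".join(current))
    return chunks

sents = ["The gNB schedules PDSCH resources.",
         "Scheduling decisions are signaled via DCI on the PDCCH.",
         "LoRA adds low-rank adapters to the attention weights."]
print(semantic_chunks(sents))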