Abstract:This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval (MHR). By casting the MHR problem as a sparse recovery problem, we devise the deep-unrolling-based Structured Learned Iterative Shrinkage and Thresholding (S-LISTA) algorithm to solve it efficiently using complex-valued convolutional neural networks with complex-valued activations, which are trained using a supervised regression objective. Afterward, a novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. At the heart of this method lies the recently proposed Few Spikes (FS) conversion, which is extended by modifying the neuron model's parameters and internal dynamics to account for the inherent coupling between real and imaginary parts in complex-valued computations. Finally, the converted SNNs are mapped onto the SpiNNaker2 neuromorphic board, and a comparison in terms of estimation accuracy and power efficiency between the original CNNs deployed on an NVIDIA Jetson Xavier and the SNNs is conducted. The measurement results show that the converted SNNs achieve an almost five-fold improvement in power efficiency at a moderate loss in performance compared to the original CNNs.
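As a plain, non-learned reference point for the unrolled S-LISTA iterations described above, one ISTA step with complex-valued soft-thresholding can be sketched as follows. This is a minimal illustration, not the paper's network: the function names, step size, and threshold values are illustrative assumptions, and the learned convolutional structure is replaced by a generic measurement matrix.

```python
import numpy as np

def complex_soft_threshold(z, tau):
    """Soft-thresholding for complex values: shrink the magnitude by tau
    while preserving the phase, the proximal operator of the l1 norm."""
    mag = np.abs(z)
    # Guard against division by zero for exactly-zero entries.
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return scale * z

def ista_step(x, y, A, tau, step):
    """One ISTA iteration for min ||Ax - y||^2 + tau ||x||_1 with
    complex-valued data; unrolling such steps yields LISTA-type networks."""
    grad = A.conj().T @ (A @ x - y)  # gradient of the data-fidelity term
    return complex_soft_threshold(x - step * grad, step * tau)
```

The key difference from the real-valued case is that thresholding acts on the magnitude only, which mirrors the real/imaginary coupling that the extended FS conversion has to preserve.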
Abstract:Multi-task large language models (MTLLMs) are important for many applications at the wireless edge, where users demand specialized models to handle multiple tasks efficiently. However, training MTLLMs is complex and computationally demanding, particularly when tasks are subject to change. Recently, the concept of model fusion via task vectors has emerged as an efficient approach for combining fine-tuning parameters to produce an MTLLM. In this paper, the problem of enabling edge users to collaboratively craft such MTLLMs via task vectors is studied, under the assumption of worst-case adversarial attacks. To this end, first the influence of adversarial noise on multi-task model fusion is investigated and a relationship between the so-called weight disentanglement error and the mean squared error (MSE) is derived. Using hypothesis testing, it is shown that the MSE directly increases interference between task vectors, thereby rendering model fusion ineffective. Then, a novel resilient MTLLM fusion (R-MTLLMF) scheme is proposed, which leverages insights about the LLM architecture and fine-tuning process to safeguard task vector aggregation under adversarial noise by realigning the MTLLM. The proposed R-MTLLMF is then compared for both worst-case and ideal transmission scenarios to study the impact of the wireless channel. Extensive model fusion experiments with vision LLMs demonstrate R-MTLLMF's effectiveness, achieving close-to-baseline performance across eight different tasks in ideal noise scenarios and significantly outperforming unprotected model fusion in worst-case scenarios. The results further advocate for additional physical layer protection for a holistic approach to resilience, from both a wireless and LLM perspective.
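The task-vector fusion that the abstract builds on can be sketched in a few lines. This is a generic task-arithmetic illustration under stated assumptions, not the R-MTLLMF scheme itself: the dictionary representation of weights, the scaling coefficient `lam`, and the function names are hypothetical.

```python
import numpy as np

def task_vector(theta_ft, theta_base):
    """Task vector: element-wise difference between fine-tuned and base weights."""
    return {k: theta_ft[k] - theta_base[k] for k in theta_base}

def fuse(theta_base, task_vectors, lam=0.3):
    """Task-arithmetic fusion: add the scaled sum of task vectors to the base
    weights. lam is an illustrative scaling coefficient; adversarial noise on
    the transmitted task vectors perturbs exactly these summands."""
    fused = {k: v.copy() for k, v in theta_base.items()}
    for tv in task_vectors:
        for k in fused:
            fused[k] += lam * tv[k]
    return fused
```

Noise added to any `tv[k]` before aggregation enters the fused weights linearly, which is why the MSE of the perturbed task vectors is a natural handle on the weight disentanglement error.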
Abstract:Path planning is a complex problem for many practical applications, particularly in robotics. Existing algorithms, however, are exhaustive in nature and become increasingly complex when additional side constraints are incorporated alongside distance minimization. In this paper, a novel approach using vision language models (VLMs) is proposed for enabling path planning in complex wireless-aware environments. To this end, insights from a digital twin (DT) with real-world wireless ray tracing data are explored in order to guarantee an average path gain threshold while minimizing the trajectory length. First, traditional approaches such as A* are compared to several wireless-aware extensions, and an optimal iterative dynamic programming approach (DP-WA*) is derived, which fully takes into account all path gains and distance metrics within the DT. Building on these baselines, the role of VLMs as an alternative assistant for path planning is investigated, and a strategic chain-of-thought tasking (SCoTT) approach is proposed. SCoTT divides the complex planning task into several subproblems and solves each with advanced CoT prompting. Results show that SCoTT achieves average path gains very close to those of DP-WA* while at the same time yielding consistently shorter path lengths. The results also show that VLMs can be used to accelerate DP-WA* by efficiently reducing the algorithm's search space, thus saving up to 62\% in execution time. This work underscores the potential of VLMs in future digital systems as capable assistants for solving complex tasks, while enhancing user interaction and accelerating rapid prototyping under diverse wireless constraints.
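To make the wireless-aware planning idea concrete, a simple penalized shortest-path search can serve as a stand-in: steps into cells whose path gain falls below a threshold are charged an extra cost. This is only an illustrative sketch, not the DP-WA* algorithm from the paper, which handles the average-gain constraint exactly; the grid layout, penalty weight, and function name are assumptions.

```python
import heapq

def wireless_aware_dijkstra(grid_gain, start, goal, gain_thresh, penalty=10.0):
    """Dijkstra on a 4-connected grid: each step costs 1 plus a penalty
    proportional to how far the entered cell's path gain falls below
    gain_thresh. Returns the cost of the cheapest start-to-goal path."""
    rows, cols = len(grid_gain), len(grid_gain[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + penalty * max(0.0, gain_thresh - grid_gain[nr][nc])
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

The penalty steers routes away from low-gain regions without hard-pruning them, which is the qualitative trade-off between trajectory length and path gain that the abstract describes.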
Abstract:The unwavering success of deep learning in the past decade led to the increasing prevalence of deep learning methods in various application fields. However, the downsides of deep learning, most prominently its lack of trustworthiness, may not be compatible with safety-critical or high-responsibility applications requiring stricter performance guarantees. Recently, several instances of deep learning applications have been shown to be subject to theoretical limitations of computability, undermining the feasibility of performance guarantees when employed on real-world computers. We extend these findings by studying computability in the deep learning framework from two perspectives: an application viewpoint in the context of classification problems, and a general limitation viewpoint in the context of training neural networks. In particular, we show restrictions on the algorithmic solvability of classification problems that also render the algorithmic detection of failure in computations in a general setting infeasible. Subsequently, we prove algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved. Finally, we end with a positive observation, showing that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
Abstract:Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML), where components of large ML models are outsourced to remote servers. A significant challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming that could jeopardize the learning process. This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding. In this paper, rigorous insights are provided into the influence of jamming LLM word embeddings in SFL by deriving an expression for the ML training loss divergence and showing that it is upper-bounded by the mean squared error (MSE). Based on this analysis, a physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks. R-SFLLM leverages wireless sensing data to gather information on the jamming directions-of-arrival (DoAs) for the purpose of devising a novel, sensing-assisted anti-jamming strategy while jointly optimizing beamforming, user scheduling, and resource allocation. Extensive experiments using BERT and RoBERTa models demonstrate R-SFLLM's effectiveness, achieving close-to-baseline performance across various natural language processing (NLP) tasks and datasets. The proposed methodology further introduces an adversarial training component, where controlled noise exposure significantly enhances the LLM's resilience to perturbed parameters during training. The results show that more noise-sensitive models, such as RoBERTa, benefit from this feature, especially when resource allocation is unfair. It is also shown that worst-case jamming in particular translates into worst-case model outcomes, thereby underscoring the need for jamming-resilient SFL protocols.
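The role of the MSE in the analysis above can be illustrated with a minimal sketch: jamming is modeled as additive white Gaussian noise on the word-embedding parameters at a given SNR, and the resulting parameter MSE is the quantity that bounds the training loss divergence. The noise model and SNR values are simplifying assumptions for illustration, not the paper's wireless channel model.

```python
import numpy as np

rng = np.random.default_rng(0)

def jam(embeddings, snr_db):
    """Perturb word-embedding parameters with white Gaussian jamming noise
    at the given signal-to-noise ratio (a simplified channel stand-in)."""
    sig_pow = np.mean(embeddings ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_pow), size=embeddings.shape)
    return embeddings + noise

def mse(a, b):
    """Mean squared error between clean and jammed parameters."""
    return float(np.mean((a - b) ** 2))
```

At 0 dB SNR the parameter MSE is on the order of the signal power itself, while at high SNR it vanishes; the derived bound ties the loss divergence to exactly this quantity.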
Abstract:Native jamming mitigation is essential for addressing security and resilience in future 6G wireless networks. In this paper, a resilient-by-design framework for effective anti-jamming in MIMO-OFDM wireless communications is introduced. A novel approach that integrates information from wireless sensing services to develop anti-jamming strategies, which do not rely on any prior information or assumptions on the adversary's concrete setup, is explored. To this end, a method that replaces conventional approaches to noise covariance estimation in anti-jamming with a surrogate covariance model is proposed, which instead incorporates sensing information on the jamming signal's directions-of-arrival (DoAs) to provide an effective approximation of the true jamming strategy. The study further focuses on integrating this novel, sensing-assisted approach into the joint optimization of beamforming, user scheduling, and power allocation for a multi-user MIMO-OFDM uplink setting. Despite the NP-hard nature of this optimization problem, it can be effectively solved using an iterative water-filling approach. In order to assess the effectiveness of the proposed sensing-assisted jamming mitigation, the corresponding worst-case jamming strategy is investigated, which aims to minimize the total user sum-rate. Experimental simulations affirm the robustness of our approach against both worst-case and barrage jamming, demonstrating its potential to address a wide range of jamming scenarios. Since such an integration of sensing-assisted information is directly implemented on the physical layer, resilience is incorporated preemptively by-design.
Abstract:This manuscript investigates the information-theoretic limits of integrated sensing and communications (ISAC), aiming for simultaneous reliable communication and precise channel state estimation. We model such a system with a state-dependent discrete memoryless channel (SD-DMC) with or without channel feedback and with generalized side information at the transmitter and the receiver, where the joint task of message decoding and state estimation is performed at the receiver. The relationship between the achievable communication rate and estimation error, the capacity-distortion (C-D) trade-off, is characterized across different causality levels of the side information. This framework is shown to be capable of modeling various practical scenarios by assigning the side information with different meanings, including monostatic and bistatic radar systems. The analysis is then extended to the two-user degraded broadcast channel, and we derive an achievable C-D region that is tight under certain conditions. To solve the optimization problem arising in the computation of C-D functions/regions, we propose a proximal block coordinate descent (BCD) method, prove its convergence to a stationary point, and derive a stopping criterion. Finally, several representative examples are studied to demonstrate the versatility of our framework and the effectiveness of the proposed algorithm.
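The proximal BCD idea can be illustrated on a toy problem: minimize a convex quadratic over two blocks, where each block update solves its subproblem exactly plus a proximal term $(\rho/2)\lVert x_i - x_i^{\text{old}}\rVert^2$ that anchors the iterate. This is a generic sketch under stated assumptions, not the paper's C-D computation; the block split, $\rho$, and iteration count are illustrative.

```python
import numpy as np

def prox_bcd(Q, b, x0, rho=1.0, iters=200):
    """Proximal block coordinate descent for min 0.5 x^T Q x - b^T x over
    two blocks. Each update solves (Q_ii + rho I) x_i = b_i - Q_io x_o + rho x_i_old,
    i.e. the block subproblem with a proximal regularizer."""
    n = len(b)
    blocks = [np.arange(n // 2), np.arange(n // 2, n)]
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for idx in blocks:
            other = np.setdiff1d(np.arange(n), idx)
            A = Q[np.ix_(idx, idx)] + rho * np.eye(len(idx))
            rhs = b[idx] - Q[np.ix_(idx, other)] @ x[other] + rho * x[idx]
            x[idx] = np.linalg.solve(A, rhs)
    return x
```

The proximal term keeps each block update well-posed even when a block's sub-Hessian is singular, which is what enables the convergence guarantee to a stationary point in the nonconvex C-D setting.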
Abstract:In this paper, a digital twinning framework for indoor integrated sensing, communications, and robotics is proposed, designed, and implemented. Besides leveraging powerful robotics and ray-tracing technologies, the framework also enables integration with real-world sensors and reactive updates triggered by changes in the environment. The framework is designed with commercial, off-the-shelf components in mind, thus facilitating experimentation in the different areas of communication, sensing, and robotics. Experimental results showcase the feasibility and accuracy of indoor localization using digital twins and validate our implementation both qualitatively and quantitatively.
Abstract:Deep learning still has drawbacks in terms of trustworthiness, which encompasses comprehensibility, fairness, safety, and reliability. To mitigate the potential risk of AI, clear obligations associated with trustworthiness have been proposed via regulatory guidelines, e.g., in the European AI Act. Therefore, a central question is to what extent trustworthy deep learning can be realized. Establishing the described properties constituting trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. Motivated by the observation that the current evolution of deep learning models necessitates a change in computing technology, we derive a mathematical framework which enables us to analyze whether a transparent implementation in a computing model is feasible. We exemplarily apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital and analog computing models represented by Turing and Blum-Shub-Smale Machines, respectively. Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems under fairly general conditions, whereas Turing machines cannot guarantee trustworthiness to the same degree.
Abstract:In this paper, we investigate the fundamental limits of MIMO-OFDM integrated sensing and communications (ISAC) systems based on a Bayesian Cram\'er-Rao bound (BCRB) analysis. We derive the BCRB for joint channel parameter estimation and data symbol detection, in which a performance trade-off between both functionalities is observed. We formulate the optimization problem for a linear precoder design and propose the stochastic Riemannian gradient descent (SRGD) approach to solve the non-convex problem. We analyze the optimality conditions and show that SRGD ensures convergence with high probability. The simulation results verify our analyses and also demonstrate fast convergence. Finally, the performance trade-off is illustrated and investigated.
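The core mechanics of Riemannian gradient descent can be illustrated on the unit sphere: project the Euclidean gradient onto the tangent space, take a step, and retract back to the manifold by normalizing. This is a deterministic toy analogue under stated assumptions, not the SRGD precoder design, which uses stochastic gradients on a matrix manifold; the objective, step size, and function name are illustrative.

```python
import numpy as np

def sphere_rgd(grad_f, x0, step=0.1, iters=500):
    """Riemannian gradient descent on the unit sphere S^{n-1}:
    tangent-space projection of the gradient, a step, then retraction."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad_f(x)
        g_tan = g - (x @ g) * x          # remove the radial component
        x = x - step * g_tan             # step in the tangent direction
        x = x / np.linalg.norm(x)        # retract back to the sphere
    return x
```

Minimizing $x^\top A x$ over the sphere this way recovers the smallest-eigenvalue direction of $A$, a standard sanity check for manifold optimization routines.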