Abstract: Traditional wireless network design relies on optimization algorithms derived from domain-specific mathematical models, which are often inefficient and unsuitable for dynamic, real-time applications due to their high complexity. Deep learning has emerged as a promising alternative for overcoming these complexity and adaptability concerns, but it faces challenges of its own, such as accuracy issues, delays, and limited interpretability stemming from its inherent black-box nature. This paper introduces a novel approach that integrates optimization theory with deep learning methodologies to address these issues. The methodology starts by constructing the block diagram of the optimization theory-based solution, identifying key building blocks corresponding to the optimality conditions and iterative solutions. Selected building blocks are then replaced with deep neural networks, enhancing the adaptability and interpretability of the system. Extensive simulations show that this hybrid approach not only reduces runtime compared to optimization theory-based approaches but also significantly improves accuracy and convergence rates, outperforming pure deep learning models.
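As a minimal sketch of the idea of replacing a building block of an iterative optimizer with a neural network, consider projected gradient ascent for a toy sum-rate power allocation, where the hand-crafted update step is delegated to a small learned module while the projection block from the analytical solution is kept. The objective, network sizes, and all names are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class LearnedUpdateBlock(nn.Module):
    """DNN that replaces the hand-crafted gradient-update building block."""
    def __init__(self, n_users: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_users, hidden), nn.ReLU(),
            nn.Linear(hidden, n_users))

    def forward(self, power, grad):
        # Learns the mapping (current iterate, gradient) -> next iterate.
        return self.net(torch.cat([power, grad], dim=-1))

def hybrid_solver(channel_gain, block, p_max=1.0, iters=10, noise=1e-2):
    """Keeps the optimization-theory loop structure (iterate + project),
    but delegates the update rule to the learned block."""
    p = torch.full_like(channel_gain, p_max / 2)
    for _ in range(iters):
        p = p.detach().requires_grad_(True)
        rate = torch.log2(1 + channel_gain * p / noise).sum()  # toy objective
        grad, = torch.autograd.grad(rate, p)
        p = block(p, grad)
        p = p.clamp(0.0, p_max)  # projection block kept from the analytical solution
    return p
```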
Abstract: Diffusion models are widely used in generative AI, leveraging their capability to capture complex data distributions. However, their potential remains largely unexplored for resource allocation in wireless networks. This paper introduces a novel diffusion model-based resource allocation strategy for Wireless Networked Control Systems (WNCSs), with the objective of minimizing total power consumption through the optimization of the sampling period of the control system, and the blocklength and packet error probability in the finite blocklength regime of the communication system. The problem is first reduced to the optimization of the blocklength alone based on the derivation of the optimality conditions. The optimization theory-based solution is then used to collect a dataset of channel gains and corresponding optimal blocklengths. Finally, a Denoising Diffusion Probabilistic Model (DDPM) is trained on this dataset to obtain a resource allocation algorithm that generates optimal blocklength values conditioned on the channel state information (CSI). Via extensive simulations, the proposed approach is shown to outperform previously proposed Deep Reinforcement Learning (DRL) based approaches, with close-to-optimal performance in terms of total power consumption. Moreover, up to an eighteen-fold reduction in critical constraint violations is observed, further underscoring the accuracy of the solution.
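The core training step can be pictured as follows: a noise-prediction network, conditioned on the channel gain, is trained on (CSI, optimal blocklength) pairs produced by the optimization-theory solution. This is a hedged sketch of a standard conditional DDPM step; the architecture, noise schedule, and variable names are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Inputs: (noisy blocklength, normalized timestep, CSI) -> predicted noise.
eps_net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(),
                        nn.Linear(128, 1))
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-3)

def ddpm_train_step(m_opt, csi):
    """m_opt: normalized optimal blocklengths (B,1); csi: channel gains (B,1)."""
    t = torch.randint(0, T, (m_opt.shape[0],))
    a_bar = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(m_opt)
    x_t = a_bar.sqrt() * m_opt + (1 - a_bar).sqrt() * eps  # forward diffusion
    t_in = (t.float() / T).unsqueeze(-1)
    eps_hat = eps_net(torch.cat([x_t, t_in, csi], dim=-1))
    loss = nn.functional.mse_loss(eps_hat, eps)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```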
Abstract: Future 6G-enabled vehicular networks face the challenge of ensuring ultra-reliable low-latency communication (URLLC) for delivering safety-critical information in a timely manner. Existing resource allocation schemes for vehicle-to-everything (V2X) communication systems primarily rely on traditional optimization-based algorithms. However, due to the high complexity and communication overhead of their solution methodologies, these methods often fail to guarantee the strict reliability and latency requirements of URLLC applications in dynamic vehicular environments. This paper proposes a novel deep reinforcement learning (DRL) based framework for joint power and block length allocation to minimize the worst-case decoding-error probability in the finite block length (FBL) regime for a URLLC-based downlink V2X communication system. The problem is formulated as a non-convex mixed-integer nonlinear programming (MINLP) problem. First, an algorithm grounded in optimization theory is developed by deriving the joint convexity of the decoding-error probability in the block length and transmit power variables within the region of interest. Subsequently, an efficient event-triggered DRL-based algorithm is proposed to solve the joint optimization problem. Incorporating event-triggered learning into the DRL framework makes it possible to assess whether to initiate the DRL process, thereby reducing the number of DRL executions while maintaining reasonable reliability performance. Simulation results demonstrate that the proposed event-triggered DRL scheme achieves 95% of the performance of the joint optimization scheme while reducing the number of DRL executions by up to 24% across different network settings.
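The decoding-error probability minimized here is commonly evaluated in the FBL regime with the normal approximation shown below. This is the standard textbook approximation, included as an illustrative stand-in for the paper's exact expression; the parameter values in the example are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def fbl_error_probability(snr, n, k):
    """Normal approximation: eps = Q((n*C - k) / sqrt(n*V)), with capacity C
    and channel dispersion V expressed in bits per channel use."""
    C = np.log2(1 + snr)
    V = (1 - (1 + snr) ** -2) * (np.log2(np.e) ** 2)
    return norm.sf((n * C - k) / np.sqrt(n * V))

# Example: error probability of sending k=256 bits over n=400 symbols at 5 dB SNR.
print(fbl_error_probability(snr=10 ** (5 / 10), n=400, k=256))
```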
Abstract: Hierarchical Federated Learning (HFL) in vehicular networks faces the significant challenge of adversarial or unreliable vehicles, whose misleading updates can compromise the integrity of the model. To address this, our study introduces a novel framework that integrates dynamic vehicle selection with robust anomaly detection, aiming to optimize participant selection and mitigate the risks associated with malicious contributions. Our approach involves a comprehensive vehicle reliability assessment based on historical accuracy, contribution frequency, and anomaly records. An anomaly detection algorithm identifies anomalous behavior by analyzing the cosine similarity of local model parameters during the federated learning (FL) process. These anomaly records are then registered and combined with historical accuracy and contribution frequency to identify the most suitable vehicles for each learning round. Dynamic client selection and anomaly detection algorithms are deployed at different levels, including cluster heads (CHs), cluster members (CMs), and the Evolved Packet Core (EPC), to detect and filter out spurious updates. Through simulation-based performance evaluation, the proposed algorithm demonstrates remarkable resilience even under intense attack conditions. Even in the worst-case scenarios, it achieves convergence times that are $63$\% as effective as those in attack-free scenarios. In contrast, without the proposed algorithm, the FL process is highly likely not to converge.
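A minimal sketch of the cosine-similarity check underlying the anomaly detection is given below: a client whose flattened parameter update points too far away from the aggregate of the other clients' updates is flagged. The threshold value and function names are assumptions for illustration.

```python
import numpy as np

def flag_anomalous_updates(updates, threshold=0.5):
    """updates: list of flattened local model-parameter vectors (np.ndarray).
    Returns the indices of clients whose update is flagged as anomalous."""
    flagged = []
    for i, u in enumerate(updates):
        # Reference direction: mean of all other clients' updates.
        others = np.mean([v for j, v in enumerate(updates) if j != i], axis=0)
        cos = np.dot(u, others) / (np.linalg.norm(u) * np.linalg.norm(others) + 1e-12)
        if cos < threshold:  # low similarity -> possibly malicious update
            flagged.append(i)
    return flagged
```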
Abstract: Radio Frequency Energy Harvesting (RF-EH) networks are key enablers of the massive Internet of Things, providing controllable and long-distance energy transfer to energy-limited devices. Relays, assisting either energy or information transfer, have been demonstrated to significantly improve the performance of these networks. This paper studies the joint relay selection, scheduling, and power control problem in multiple-source multiple-relay RF-EH networks under nonlinear EH conditions. We first obtain the optimal solution to the scheduling and power control problem for a given relay selection. The relay selection problem is then formulated as a classification problem, for which two convolutional neural network (CNN) based architectures are proposed. While the first architecture employs conventional 2D convolution blocks and benefits from skip connections between layers, the second replaces them with inception blocks to decrease the number of trainable parameters without sacrificing accuracy in memory-constrained applications. To further decrease the runtime complexity, teacher-student learning is employed, in which the teacher is a larger network and the student is a smaller CNN-based architecture that distills the teacher's knowledge. A novel dichotomous search-based algorithm is employed to determine the best architecture for the student network. Our simulation results demonstrate that the proposed solutions provide lower complexity than state-of-the-art iterative approaches without compromising optimality.
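The teacher-student step can be illustrated with the standard knowledge-distillation loss: the student classifier is trained on a blend of the hard relay-selection labels and the teacher's softened output distribution. The temperature and mixing weight below are conventional defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard KD loss: alpha * KL(teacher || student, softened by T)
    + (1 - alpha) * cross-entropy on the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)  # T^2 restores gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```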
Abstract: The use of federated learning (FL) in Vehicular Ad hoc Networks (VANETs) has garnered significant research interest due to its advantages of reducing transmission overhead and protecting user privacy by communicating gradients computed on local datasets instead of the raw data. However, implementing FL in VANETs faces challenges, including limited communication resources, high vehicle mobility, and the statistical diversity of data distributions. To tackle these issues, this paper introduces a novel framework for hierarchical federated learning (HFL) over multi-hop clustering-based VANETs. The proposed method uses a weighted combination of the average relative speed and the cosine similarity of FL model parameters as the clustering metric, thereby accounting for both data diversity and high vehicle mobility. This metric ensures convergence with minimal changes in cluster heads while handling the complexities associated with non-independent and identically distributed (non-IID) data scenarios. Additionally, the framework includes a novel mechanism to manage seamless transitions of cluster heads (CHs), followed by transferring the most recent FL model parameters to the designated CH. Furthermore, the proposed approach considers merging CHs to reduce their count and, consequently, mitigate the associated overhead. Through extensive simulations, the proposed hierarchical federated learning over clustered VANETs is demonstrated to significantly improve accuracy and convergence time while maintaining an acceptable level of packet overhead compared to previously proposed clustering algorithms and non-clustered VANETs.
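The clustering metric can be sketched as below: a weighted combination of the normalized relative speed between a vehicle and a candidate cluster head and the cosine similarity of their FL model parameters. The weight, the normalization by a maximum speed, and the sign convention (higher is better) are illustrative assumptions.

```python
import numpy as np

def clustering_metric(speed_i, speed_ch, params_i, params_ch,
                      w=0.5, v_max=40.0):
    """Lower relative speed and higher parameter similarity -> better score."""
    rel_speed = abs(speed_i - speed_ch) / v_max  # normalized to [0, 1]
    cos_sim = np.dot(params_i, params_ch) / (
        np.linalg.norm(params_i) * np.linalg.norm(params_ch) + 1e-12)
    return w * (1 - rel_speed) + (1 - w) * cos_sim  # higher is better
```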
Abstract: The Ultra-Reliable Low-Latency Communications (URLLC) paradigm in sixth-generation (6G) systems relies heavily on precise channel modeling, especially when dealing with rare and extreme events within wireless communication channels. This paper explores a novel methodology that integrates Extreme Value Theory (EVT) and Generative Adversarial Networks (GANs) to achieve precise channel modeling in real time. The proposed approach harnesses EVT by employing the Generalized Pareto Distribution (GPD) to model the distribution of extreme events, and then employs GANs to estimate the parameters of the GPD. In contrast to conventional GAN configurations that focus on estimating the overall distribution, the proposed approach incorporates an additional block within the GAN structure designed explicitly to estimate the GPD parameters directly. Through extensive simulations across different sample sizes, the proposed GAN-based approach consistently demonstrates superior adaptability, surpassing Maximum Likelihood Estimation (MLE), particularly in scenarios with limited sample sizes.
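One way such a parameter-estimating block can be wired into a GAN is sketched below: a head outputs the GPD parameters (shape and scale), and differentiable inverse-CDF sampling turns them into tail samples for the discriminator. The architectures, positivity constraints, and training details are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Hypothetical generator head that outputs (shape xi, scale sigma) of the GPD.
param_head = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
disc = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))

def sample_gpd(xi, sigma, n):
    """Differentiable GPD sampling via the inverse CDF."""
    u = torch.rand(n)
    return sigma / xi * ((1 - u) ** (-xi) - 1)

z = torch.randn(16)
xi_raw, sigma_raw = param_head(z)
xi = nn.functional.softplus(xi_raw)        # keep parameters positive
sigma = nn.functional.softplus(sigma_raw)
fake_tail = sample_gpd(xi, sigma, n=256).unsqueeze(-1)
d_fake = disc(fake_tail)  # adversarial losses on d_fake / d_real train both nets
```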
Abstract: Proper determination of the transmission rate in ultra-reliable low-latency communication (URLLC) needs to incorporate a confidence interval (CI) for the estimated parameters, owing to the large amount of data required for their accurate estimation. In this paper, we propose a framework based on extreme value theory (EVT) for determining the transmission rate, along with its corresponding CI, for an ultra-reliable communication system. The framework consists of characterizing the statistics of extreme events by fitting the generalized Pareto distribution (GPD) to the channel tail, deriving the GPD parameters and their associated CIs, and obtaining the transmission rate within a confidence interval. Based on data collected within the engine compartment of a Fiat Linea, we demonstrate the accuracy of the rate estimated through the EVT-based framework when the confidence interval of the GPD parameters is taken into account. Additionally, we show that proper estimation of the transmission rate with the proposed framework requires fewer samples than traditional extrapolation-based approaches.
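A minimal sketch of the first two steps, fitting the GPD to the channel tail and attaching CIs to its parameters, is shown below. The paper derives the CIs analytically; the bootstrap used here is a simple stand-in, and the lower-tail transformation and names are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_fit_with_ci(received_power, threshold, n_boot=500, level=0.95):
    """Fit a GPD to the lower tail (deficits below the threshold) and
    return point estimates plus bootstrap CIs for (shape, scale)."""
    exceed = threshold - received_power[received_power < threshold]
    shape, _, scale = genpareto.fit(exceed, floc=0)
    boots = []
    for _ in range(n_boot):
        resample = np.random.choice(exceed, size=exceed.size, replace=True)
        c, _, s = genpareto.fit(resample, floc=0)
        boots.append((c, s))
    boots = np.array(boots)
    lo, hi = np.percentile(boots, [(1 - level) / 2 * 100,
                                   (1 + level) / 2 * 100], axis=0)
    return (shape, scale), (lo, hi)  # estimates and per-parameter CI bounds
```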
Abstract: Diversity schemes play a vital role in improving the performance of ultra-reliable communication systems by transmitting over two or more communication channels to combat fading and co-channel interference. Determining a transmission strategy that satisfies the ultra-reliability constraint necessitates deriving the channel statistics in the ultra-reliable region and, subsequently, integrating these statistics into rate selection, while incorporating a confidence interval to account for the uncertainties that may arise during estimation. In this paper, we propose a novel framework for ultra-reliable real-time transmission that considers both spatial diversity and ultra-reliable channel statistics based on multivariate extreme value theory (MEVT). First, the joint tail distribution of the received power sequences obtained from different receivers is modeled, incorporating the interrelations of rarely occurring extreme events through the Poisson point process approach of MEVT. The optimal transmission strategy is then developed by determining the optimal transmission rate based on the estimated joint tail distribution, with confidence intervals incorporated into the estimates to cope with the limited availability of data. Finally, the system reliability is assessed using the outage probability metric. Through the analysis of data obtained from the engine compartment of a Fiat Linea, our study showcases the effectiveness of the proposed methodology in surpassing traditional extrapolation-based approaches: it not only achieves a higher transmission rate but also effectively addresses the stringent requirements of ultra-reliability. The findings indicate that the proposed rate selection framework offers a viable solution for achieving a desired target error probability with a higher transmission rate and less training data than conventional rate selection methods.
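The final reliability assessment can be pictured with a simple empirical check, given below: from the joint received-power traces of two receivers, estimate the outage probability of a candidate rate under selection combining. This sketch only illustrates the outage metric, not the MEVT tail modeling itself; the combining rule, noise power, and names are assumptions.

```python
import numpy as np

def empirical_outage(p_rx_1, p_rx_2, rate, noise=1e-9):
    """Fraction of time instants where even the best branch cannot support `rate`."""
    best = np.maximum(p_rx_1, p_rx_2)        # selection combining across receivers
    capacity = np.log2(1 + best / noise)
    return np.mean(capacity < rate)

# A candidate rate is accepted if empirical_outage(...) stays below the
# target error probability, e.g. 1e-5.
```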
Abstract: Ultra-reliable low latency communication (URLLC) requires the packet error rate to be on the order of $10^{-9}$ to $10^{-5}$. Determining a transmission rate that satisfies this ultra-reliability constraint requires deriving the statistics of the channel in the ultra-reliable region and then incorporating these statistics into the rate selection. In this paper, we propose a framework for rate selection in ultra-reliable communications based on extreme value theory (EVT). We first model the wireless channel in the URLLC regime by estimating the parameters of the generalized Pareto distribution (GPD) that best fits the tail distribution of the received powers, i.e., the power values below a certain threshold. We then determine the maximum transmission rate by incorporating the fitted GPD into the rate selection function. Finally, we validate the selected rate by computing the resulting error probability. Based on the data collected within the engine compartment of a Fiat Linea, we demonstrate the superior performance of the proposed methodology in determining the maximum transmission rate compared to traditional extrapolation-based approaches.
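The rate-selection step can be sketched as follows: the fitted GPD yields the received-power quantile at the target error probability, which is then mapped to a transmission rate. The lower-tail parameterization, the threshold exceedance probability `zeta`, and the noise power are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import genpareto

def select_rate(shape, scale, threshold, zeta, eps_target, noise=1e-9):
    """zeta = P(power < threshold); eps_target = target packet error rate.
    Returns the largest rate whose outage probability equals eps_target."""
    # The GPD models the deficit below the threshold: x = threshold - power.
    q = eps_target / zeta                        # conditional tail probability
    deficit = genpareto.ppf(1 - q, shape, loc=0, scale=scale)
    p_eps = threshold - deficit                  # power exceeded w.p. 1 - eps_target
    return np.log2(1 + p_eps / noise)
```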