Abstract: With the great success of deep learning (DL) in image classification, speech recognition, and other fields, more and more studies have applied various neural networks (NNs) to wireless resource allocation. Generally speaking, these artificial intelligence (AI) models are trained under specific learning assumptions, in particular that the statistics of the training data remain static during the training stage. However, the distribution of channel state information (CSI) is constantly changing in real-world wireless communication environments. Therefore, it is essential to study effective dynamic DL techniques for wireless resource allocation problems. In this paper, we propose a novel framework, named meta-gating, for solving resource allocation problems in an episodically dynamic wireless environment, where the CSI distribution changes across periods and remains constant within each period. The proposed framework, consisting of an inner network and an outer network, aims to adapt to the dynamic wireless environment by achieving three important goals, i.e., seamlessness, quickness, and continuity. Specifically, for the former two goals, we propose a training method that combines a model-agnostic meta-learning (MAML) algorithm with an unsupervised learning mechanism. With this training method, the inner network is able to adapt quickly to different channel distributions thanks to its good initialization. As for the goal of continuity, the outer network learns to evaluate the importance of the inner network's parameters under different CSI distributions and then decides which subset of the inner network should be activated through the gating operation. Additionally, we theoretically analyze the performance of the proposed meta-gating framework.
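To make the inner/outer-network idea concrete, the following is a minimal PyTorch sketch of the gating operation and an unsupervised meta-update. It is purely illustrative: the layer sizes, the placeholder loss, and the simplified first-order meta-update (which omits the per-task inner adaptation of full MAML) are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MetaGatingNet(nn.Module):
    """Illustrative inner/outer network pair (sizes are assumptions)."""
    def __init__(self, n_in=16, n_hidden=64, n_out=16):
        super().__init__()
        # Inner network: maps CSI features to a resource-allocation decision.
        self.inner1 = nn.Linear(n_in, n_hidden)
        self.inner2 = nn.Linear(n_hidden, n_out)
        # Outer network: produces gates in [0, 1] that decide which
        # inner-network neurons are active for the current CSI distribution.
        self.outer = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())

    def forward(self, csi):
        gate = self.outer(csi)                   # importance of each hidden unit
        h = torch.relu(self.inner1(csi)) * gate  # gating operation
        return self.inner2(h)

def unsupervised_loss(alloc, csi):
    # Placeholder objective; a real system would plug in the actual
    # (negative) rate expression of the considered allocation problem.
    return -(alloc * csi).mean()

net = MetaGatingNet()
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):
    meta_opt.zero_grad()
    # Gradients from several tasks (channel distributions) are accumulated
    # into one meta-update; full MAML inner loops are omitted for brevity.
    for _ in range(2):
        csi = torch.rand(32, 16)                 # synthetic CSI batch for one task
        loss = unsupervised_loss(net(csi), csi)
        loss.backward()
    meta_opt.step()
```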
Abstract: As an efficient graph analytical tool, graph neural networks (GNNs) have special properties that make them particularly well suited to the characteristics and requirements of wireless communications, exhibiting good potential for the advancement of next-generation wireless communications. This article aims to provide a comprehensive overview of the interplay between GNNs and wireless communications, including GNNs for wireless communications (GNN4Com) and wireless communications for GNNs (Com4GNN). In particular, we discuss GNN4Com based on how the graphical models are constructed and introduce Com4GNN with its corresponding incentives. We also highlight potential research directions to promote future research endeavors for GNNs in wireless communications.
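As a small illustration of the graph construction underlying GNN4Com, the numpy sketch below models a K-link interference network as a graph with transceiver pairs as nodes and interference strengths as edge weights, then runs one message-passing layer. The feature choices and the simple layer are assumptions for illustration only.

```python
import numpy as np

def build_interference_graph(H):
    """Construct a graph from a KxK channel-gain matrix H.

    Node k : transceiver pair k, with direct gain H[k, k] as its feature.
    Edge jk: interference from transmitter j to receiver k, weight H[j, k].
    (One common construction, shown only to illustrate GNN4Com modeling.)
    """
    K = H.shape[0]
    node_feat = np.diag(H).reshape(K, 1)
    adj = H * (1.0 - np.eye(K))          # zero out self-loops
    return node_feat, adj

def gnn_layer(node_feat, adj, W_self, W_nbr):
    """One message-passing layer: aggregate weighted neighbor features."""
    agg = adj @ node_feat
    return np.maximum(node_feat @ W_self + agg @ W_nbr, 0.0)

# Example: 4-link network with random channel gains.
rng = np.random.default_rng(0)
H = rng.rayleigh(size=(4, 4))
x, a = build_interference_graph(H)
out = gnn_layer(x, a, rng.normal(size=(1, 8)), rng.normal(size=(1, 8)))
print(out.shape)                          # (4, 8) per-link embeddings
```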
Abstract: As an efficient neural network model for graph data, graph neural networks (GNNs) have recently found successful applications in various wireless optimization problems. Given that the inference stage of GNNs can be naturally implemented in a decentralized manner, GNNs are a potential enabler of decentralized control/management in next-generation wireless communications. Privacy leakage, however, may occur due to the information exchanges among neighbors during decentralized inference with GNNs. To deal with this issue, in this paper, we analyze and enhance the privacy of decentralized inference with GNNs in wireless networks. Specifically, we adopt local differential privacy as the metric, and design novel privacy-preserving signals as well as privacy-guaranteed training algorithms to achieve privacy-preserving inference. We also define the SNR-privacy trade-off function to analyze the performance upper bound of decentralized inference with GNNs in wireless networks. To further enhance communication and computation efficiency, we adopt the over-the-air computation technique and theoretically demonstrate its advantage in privacy preservation. Through extensive simulations on synthetic graph data, we validate our theoretical analysis, verify the effectiveness of the proposed privacy-preserving wireless signaling and privacy-guaranteed training algorithm, and offer some guidance on practical implementation.
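The numpy sketch below conveys the flavor of one privacy-preserving aggregation step: each neighbor clips and perturbs the signal it transmits, and the receiver observes the over-the-air superposition of all neighbor signals plus channel noise. The noise calibration and signaling details are simplified assumptions, not the paper's exact design or privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatized_signal(x, clip=1.0, sigma=0.5):
    """Clip and perturb a node's embedding before transmission,
    in the spirit of local differential privacy (calibration simplified)."""
    x = x * min(1.0, clip / (np.linalg.norm(x) + 1e-12))   # bound sensitivity
    return x + rng.normal(scale=sigma, size=x.shape)        # add Gaussian noise

def over_the_air_aggregate(neighbor_signals, noise_std=0.1):
    """Neighbors transmit simultaneously; the receiver observes their
    superposition plus channel noise (over-the-air computation)."""
    superposed = np.sum(neighbor_signals, axis=0)
    return superposed + rng.normal(scale=noise_std, size=superposed.shape)

# One aggregation step at a node with three neighbors.
neighbors = [rng.normal(size=8) for _ in range(3)]
received = over_the_air_aggregate([privatized_signal(x) for x in neighbors])
print(received.shape)   # (8,) aggregated, privacy-perturbed neighbor information
```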
Abstract: The graph neural network (GNN) is an efficient neural network model for graph data and is widely used in different fields, including wireless communications. Unlike other neural network models, a GNN can be implemented in a decentralized manner with information exchanges among neighbors, making it a potentially powerful tool for decentralized control in wireless communication systems. The main bottleneck, however, is wireless channel impairments, which deteriorate the prediction robustness of the GNN. To overcome this obstacle, we analyze and enhance the robustness of the decentralized GNN in different wireless communication systems in this paper. Specifically, using a GNN binary classifier as an example, we first develop a methodology to verify whether its predictions are robust. Then, we analyze the performance of the decentralized GNN binary classifier in both uncoded and coded wireless communication systems. To remedy imperfect wireless transmission and enhance prediction robustness, we further propose novel retransmission mechanisms for these two communication systems, respectively. Through simulations on synthetic graph data, we validate our analysis, verify the effectiveness of the proposed retransmission mechanisms, and provide some insights for practical implementation.
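A toy numpy sketch of the retransmission idea follows: a node aggregates noisy copies of its neighbors' features and keeps requesting retransmissions until a simple decision-margin test deems the binary prediction robust. The margin test, channel model, and linear readout are illustrative assumptions rather than the mechanisms proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_exchange(true_feat, noise_std):
    """One (uncoded) wireless transmission of a neighbor's feature vector."""
    return true_feat + rng.normal(scale=noise_std, size=true_feat.shape)

def classify(agg_feat, w):
    """GNN binary classifier head (a single linear readout for illustration)."""
    return float(agg_feat @ w)

def robust_predict(neighbor_feats, w, noise_std=0.5, margin=1.0, max_rounds=5):
    """Average retransmitted copies until the decision margin exceeds a
    threshold, i.e., the prediction is deemed robust to channel noise."""
    received = [np.zeros_like(f) for f in neighbor_feats]
    for t in range(1, max_rounds + 1):
        for i, f in enumerate(neighbor_feats):
            received[i] += noisy_exchange(f, noise_std)   # one more copy
        agg = np.mean([r / t for r in received], axis=0)  # average over copies
        score = classify(agg, w)
        if abs(score) >= margin:                          # robust enough: stop
            return int(score > 0), t
    return int(score > 0), max_rounds

feats = [rng.normal(size=6) for _ in range(4)]
w = rng.normal(size=6)
label, rounds = robust_predict(feats, w)
print(label, rounds)
```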
Abstract: The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server. FedL ignores two important characteristics of contemporary wireless networks, however: (i) the network may contain heterogeneous communication/computation resources, and (ii) there may be significant overlaps in devices' local data distributions. In this work, we develop a novel optimization methodology that jointly accounts for these factors via intelligent device sampling complemented by device-to-device (D2D) offloading. Our optimization aims to select the best combination of sampled nodes and data offloading configuration to maximize FedL training accuracy subject to realistic constraints on the network topology and device capabilities. Theoretical analysis of the D2D offloading subproblem leads to new FedL convergence bounds and an efficient sequential convex optimizer. Using this result, we develop a sampling methodology based on graph convolutional networks (GCNs) that learns the relationship between network attributes, sampled nodes, and the resulting offloading configuration that maximizes FedL accuracy. Through evaluation on real-world datasets and network measurements from our IoT testbed, we find that our methodology, while sampling less than 5% of all devices, substantially outperforms conventional FedL in terms of both trained model accuracy and required resource utilization.
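A minimal numpy sketch of the GCN-based sampling step: a two-layer graph convolution scores devices from network attributes, and the top-scoring devices are selected for FedL participation. The attribute set, random weights, and fixed sample size are assumptions for illustration; the actual method also couples sampling with the D2D offloading configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def gcn_scores(A, X, W1, W2):
    """Two-layer GCN producing one selection score per device.
    A: adjacency of the D2D network, X: device attributes
    (e.g., compute speed, link quality, data-overlap estimates)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d = A_hat.sum(axis=1)
    norm = A_hat / np.sqrt(np.outer(d, d))              # symmetric normalization
    h = np.maximum(norm @ X @ W1, 0.0)                  # layer 1 + ReLU
    return (norm @ h @ W2).ravel()                      # layer 2 -> scalar score

def sample_devices(A, X, W1, W2, k=3):
    """Pick the k devices with the highest GCN scores."""
    return np.argsort(-gcn_scores(A, X, W1, W2))[:k]

n, f = 10, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                          # symmetric D2D topology
X = rng.random((n, f))
sampled = sample_devices(A, X, rng.normal(size=(f, 8)), rng.normal(size=(8, 1)))
print(sampled)                                          # indices of sampled devices
```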
Abstract: The combinatorial auction (CA) is an efficient mechanism for resource allocation in different fields, including cloud computing. It can achieve high economic efficiency and user flexibility by allowing bidders to submit bids for combinations of different items instead of only individual items. However, the problem of allocating items among the bidders to maximize the auctioneer's revenue, i.e., the winner determination problem (WDP), is NP-complete and inapproximable. Existing works on WDPs are generally based on mathematical optimization techniques, and most of them focus on the single-unit WDP, where each item has only one unit. In contrast, few works consider the multi-unit WDP, in which each item may have multiple units. Given that the multi-unit WDP is more complicated but prevalent in cloud computing, we propose leveraging machine learning (ML) techniques to develop a novel low-complexity algorithm for solving this problem with negligible revenue loss. Specifically, we model the multi-unit WDP as an augmented bipartite bid-item graph and use a graph neural network (GNN) with half-convolution operations to learn the probability of each bid belonging to the optimal allocation. To improve sample generation efficiency and decrease the number of labeled instances needed, we propose two different sample generation processes. We also develop two novel graph-based post-processing algorithms to transform the outputs of the GNN into feasible solutions. Through simulations on both synthetic instances and a specific virtual machine (VM) allocation problem in a cloud computing platform, we validate that the proposed method can approach optimal performance with low complexity and has good generalization ability in terms of problem size and user-type distribution.
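The numpy sketch below illustrates the two ingredients at a high level: a pair of "half-convolutions" passing messages across the bipartite bid-item graph, and a greedy post-processing step that accepts bids in decreasing probability order while respecting item capacities. The feature choices, random weights, and greedy rule are simplified assumptions, not the paper's exact architecture or post-processing algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)

def half_convolution(B, item_feat, bid_feat, W_i, W_b):
    """Two half-convolutions on the bipartite bid-item graph B
    (B[i, j] = units of item i requested by bid j)."""
    item_msg = np.maximum(B.T @ item_feat @ W_i, 0.0)   # items -> bids
    bid_emb = np.concatenate([bid_feat, item_msg], axis=1)
    bid_msg = np.maximum(B @ bid_emb @ W_b, 0.0)        # bids -> items
    return bid_emb, bid_msg

def greedy_allocation(bid_prob, B, capacity):
    """Accept bids in decreasing probability order if enough units remain."""
    accepted, remaining = [], capacity.astype(float).copy()
    for j in np.argsort(-bid_prob):
        if np.all(B[:, j] <= remaining):
            accepted.append(j)
            remaining -= B[:, j]
    return accepted

n_items, n_bids = 3, 6
B = rng.integers(0, 3, size=(n_items, n_bids)).astype(float)
capacity = np.array([4.0, 4.0, 4.0])
item_feat = capacity.reshape(-1, 1)                     # per-item unit counts
bid_feat = rng.random((n_bids, 2))                      # e.g., bid price features
bid_emb, _ = half_convolution(B, item_feat, bid_feat,
                              rng.normal(size=(1, 2)), rng.normal(size=(4, 4)))
bid_prob = 1.0 / (1.0 + np.exp(-bid_emb @ rng.normal(size=(4,))))  # readout
print(greedy_allocation(bid_prob, B, capacity))         # indices of accepted bids
```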
Abstract: Generalized Benders decomposition (GBD) is a globally optimal algorithm for mixed integer nonlinear programming (MINLP) problems, which are NP-hard and widely found in the area of wireless resource allocation. The main idea of GBD is to decompose an MINLP problem into a primal problem and a master problem, which are solved iteratively until their solutions converge. However, a direct implementation of GBD is time- and memory-consuming. The main bottleneck is the high complexity of the master problem, which increases over the iterations. Therefore, we propose to leverage machine learning (ML) techniques to accelerate GBD by decreasing the complexity of the master problem. Specifically, we utilize two different ML techniques, classification and regression, for this acceleration task. In this way, a cut classifier and a cut regressor are learned, respectively, to distinguish between useful and useless cuts. Only useful cuts are added to the master problem, and thus the complexity of the master problem is reduced. Using a resource allocation problem in device-to-device communication networks as an example, we validate that the proposed method can reduce the computational complexity of GBD without loss of optimality and has strong generalization ability. The proposed method is applicable to various MINLP problems in wireless networks, since its design is invariant across different problems.
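A small numpy sketch of the cut-classification idea: features are extracted for each candidate Benders cut, and a learned binary classifier decides whether the cut enters the master problem. The feature definitions and the logistic classifier here are stand-in assumptions; in practice the classifier (or regressor) would be trained offline on labeled cuts.

```python
import numpy as np

rng = np.random.default_rng(5)

def cut_features(cut_coeffs, cut_rhs, incumbent):
    """Simple hand-crafted features for one candidate cut
    (illustrative choices, not the paper's exact feature set)."""
    violation = cut_coeffs @ incumbent - cut_rhs        # violation at the incumbent
    return np.array([violation, np.linalg.norm(cut_coeffs), cut_rhs])

def is_useful(cut_feat, w, b):
    """Learned classifier: keep the cut only if predicted useful."""
    return 1.0 / (1.0 + np.exp(-(cut_feat @ w + b))) > 0.5

# Screening candidate cuts before adding them to the master problem.
w, b = rng.normal(size=3), 0.0                          # trained offline in practice
incumbent = rng.random(4)
master_cuts = []
for _ in range(20):                                     # candidate cuts this iteration
    coeffs, rhs = rng.normal(size=4), rng.normal()
    if is_useful(cut_features(coeffs, rhs, incumbent), w, b):
        master_cuts.append((coeffs, rhs))               # only useful cuts are kept
print(len(master_cuts), "cuts added to the master problem")
```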