Abstract: This paper develops a unified framework to maximize the network sum-rate in a multi-user, multi-BS downlink terahertz (THz) network by jointly optimizing user associations, the number and bandwidth of sub-bands in a THz transmission window (TW), the bandwidth of the leading and trailing edge-bands in a TW, sub-band assignment, and power allocations. The proposed framework incorporates multi-connectivity and captures the impact of molecular absorption coefficient variations within a TW, beam-squint, molecular absorption noise, and link blockages. To make the problem tractable, we first propose a convex approximation of the molecular absorption coefficient using curve fitting in a TW, determine the feasible bandwidths of the leading and trailing edge-bands, and then derive a closed-form optimal solution for the number of sub-bands considering beam-squint constraints. We then decompose the joint user association, sub-band assignment, and power allocation problem into two sub-problems, i.e., \textbf{(i)} joint user association and sub-band assignment, and \textbf{(ii)} power allocation. To solve the former, we analytically prove the unimodularity of the constraint matrix, which enables us to relax the integer constraints without loss of optimality. To solve the power allocation sub-problem, we propose both a fractional programming (FP)-based centralized solution and an alternating direction method of multipliers (ADMM)-based lightweight distributed solution. The overall problem is then solved using alternating optimization until convergence. Complexity analysis of the algorithms and numerical convergence results are presented. Numerical findings validate the effectiveness of the proposed algorithms and extract useful insights about the interplay of the density of base stations (BSs), the average order of multi-connectivity (AOM), molecular absorption, hardware impairments, imperfect CSI, and link blockages.
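The flavor of the FP-based power allocation can be conveyed with a minimal sketch of quadratic-transform fractional programming on a toy multi-user interference channel. The channel gains, noise power, and per-user power budget below are hypothetical, and the sketch deliberately omits the paper's THz-specific sub-band structure, user association, and ADMM distribution — it only illustrates the alternating closed-form updates of the auxiliary variables and powers:

```python
import numpy as np

def fp_power_allocation(H, sigma2, p_max, iters=100):
    """Quadratic-transform FP iterations for sum-rate power control.

    H[i, j]: channel gain from transmitter j to receiver i.
    """
    K = H.shape[0]
    p = np.full(K, p_max)                       # start from full power
    for _ in range(iters):
        rx = H @ p + sigma2                     # total received power + noise
        sig = np.diag(H) * p                    # desired signal power
        gamma = sig / (rx - sig)                # SINR (auxiliary variable)
        y = np.sqrt((1 + gamma) * sig) / rx     # quadratic-transform variable
        # closed-form power update, clipped to the per-user budget
        p = np.minimum(p_max, y**2 * (1 + gamma) * np.diag(H) / (H.T @ y**2) ** 2)
    return p

def sum_rate(H, p, sigma2):
    rx = H @ p + sigma2
    sig = np.diag(H) * p
    return np.sum(np.log2(1 + sig / (rx - sig)))
```

Each iteration is non-decreasing in the sum-rate, so the loop converges to a stationary point of the power control problem under the box constraint.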
Abstract: Large Language Models (LLMs) such as ChatGPT and LLaMA are advancing rapidly in generative Artificial Intelligence (AI), but their immense size poses significant challenges, such as huge training and inference costs, substantial energy demands, and limitations for on-site deployment. Traditional compression methods such as pruning, distillation, and low-rank approximation focus on reducing the effective number of neurons in the network, while quantization focuses on reducing the numerical precision of individual weights to shrink the model size while keeping the number of neurons fixed. While these compression methods have been relatively successful in practice, there is no compelling reason to believe that truncating the number of neurons is an optimal strategy. In this context, this paper introduces CompactifAI, an innovative LLM compression approach using quantum-inspired Tensor Networks that focuses instead on the model's correlation space, allowing for a more controlled, refined, and interpretable model compression. Our method is versatile and can be implemented with - or on top of - other compression techniques. As a benchmark, we demonstrate that CompactifAI alone enables compression of the LLaMA-2 7B model to only $30\%$ of its original size while recovering over $90\%$ of the original accuracy after a brief distributed retraining.
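The idea of compressing in the model's correlation space, rather than pruning neurons, can be sketched with a truncated-SVD tensorization of a single weight matrix into a two-core matrix product operator (MPO). The layer size, index grouping, and bond dimension below are illustrative stand-ins, not CompactifAI's actual architecture or settings — the sketch only shows how the bond dimension `chi` controls how much correlation is retained:

```python
import numpy as np

def mpo_compress(W, d_in=(8, 8), d_out=(8, 8), chi=16):
    """Split a (64 x 64) weight matrix into two MPO cores with bond dimension chi."""
    # reshape into (in1, in2, out1, out2) and group as (in1, out1) x (in2, out2)
    T = W.reshape(d_in[0], d_in[1], d_out[0], d_out[1]).transpose(0, 2, 1, 3)
    M = T.reshape(d_in[0] * d_out[0], d_in[1] * d_out[1])
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    A = U[:, :chi] * s[:chi]          # core 1: (in1*out1, chi)
    B = Vt[:chi, :]                   # core 2: (chi, in2*out2)
    return A, B

def mpo_reconstruct(A, B, d_in=(8, 8), d_out=(8, 8)):
    """Contract the two cores back into a dense weight matrix."""
    T = (A @ B).reshape(d_in[0], d_out[0], d_in[1], d_out[1]).transpose(0, 2, 1, 3)
    return T.reshape(d_in[0] * d_in[1], d_out[0] * d_out[1])
```

With `chi=16` the two cores hold 2,048 parameters versus 4,096 in the dense matrix; truncating the singular values discards the weakest correlations first, which is what makes the compression controlled and interpretable.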
Abstract: Deep neural networks (DNNs) are emerging as a potential solution to NP-hard wireless resource allocation problems. However, in the presence of intricate constraints, e.g., users' quality-of-service (QoS) constraints, guaranteeing constraint satisfaction becomes a fundamental challenge. In this paper, we propose a novel unsupervised learning framework to solve the classical power control problem in a multi-user interference channel, where the objective is to maximize the network sum-rate under users' minimum data rate or QoS requirements and power budget constraints. Utilizing a differentiable projection function, two novel deep learning (DL) solutions are pursued. The first is called Deep Implicit Projection Network (DIPNet), and the second is called Deep Explicit Projection Network (DEPNet). DIPNet utilizes a differentiable convex optimization layer to implicitly define a projection function. On the other hand, DEPNet uses an explicitly defined projection function, which has an iterative nature and relies on a differentiable correction process. DIPNet requires convex constraints, whereas DEPNet does not require convexity and has a reduced computational complexity. To enhance the sum-rate performance of the proposed models even further, the Frank-Wolfe (FW) algorithm is applied to the output of the proposed models. Extensive simulations show that the proposed DNN solutions not only improve the achievable data rate but also achieve zero constraint violation probability, compared to existing DNNs. The proposed solutions also outperform classic optimization methods in terms of computation time.
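The explicit, iteratively corrected projection idea behind DEPNet can be illustrated with a minimal numpy sketch: clip a candidate power vector to the power budget, then take gradient steps that reduce a hinge-type QoS violation. The two-user channel, step size, and finite-difference gradient below are illustrative simplifications, not the paper's exact (differentiable, autograd-based) correction process:

```python
import numpy as np

def rates(p, H, sigma2):
    """Per-user achievable rates in bits/s/Hz; H[i, j] is the gain j -> i."""
    rx = H @ p + sigma2
    sig = np.diag(H) * p
    return np.log2(1 + sig / (rx - sig))

def project_qos(p, H, sigma2, r_min, p_max, steps=200, eta=0.01, eps=1e-6):
    """Explicit projection: box-clip, then shrink the QoS violation iteratively."""
    p = np.clip(p, 0.0, p_max)
    for _ in range(steps):
        viol = np.maximum(0.0, r_min - rates(p, H, sigma2)).sum()
        if viol == 0.0:                 # all minimum-rate constraints satisfied
            break
        # finite-difference gradient of the total hinge violation
        g = np.zeros_like(p)
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = eps
            v_plus = np.maximum(0.0, r_min - rates(p + dp, H, sigma2)).sum()
            g[i] = (v_plus - viol) / eps
        p = np.clip(p - eta * g, 0.0, p_max)
    return p
```

Because every operation here is (sub)differentiable, the same correction loop can be unrolled inside a training graph, which is what lets the network learn to emit outputs whose projections satisfy the constraints.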
Abstract: Many resource allocation problems in the field of wireless communications can be formulated as the generalized assignment problem (GAP). The GAP is a generic form of the linear sum assignment problem (LSAP) and is more challenging to solve owing to the presence of both equality and inequality constraints. We propose a novel deep unsupervised learning (DUL) approach to solve the GAP in a time-efficient manner. More specifically, we propose a new approach that trains a deep neural network (DNN) using a customized loss function. This customized loss function comprises the objective function and penalty terms corresponding to both the equality and inequality constraints. Furthermore, we propose to employ a Softmax activation function at the output of the DNN along with tensor splitting, which simplifies the customized loss function and guarantees satisfaction of the equality constraints. As a case study, we consider a typical user-association problem in a wireless network, formulate it as a GAP, and consequently solve it using our proposed DUL approach. Numerical results demonstrate that the proposed DUL approach provides near-optimal results with significantly lower time complexity.
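The Softmax-plus-penalty construction can be sketched as a customized loss on a small user-association GAP: a row-wise softmax over each user's logits guarantees the one-association-per-user equality constraint by construction, while BS-capacity inequalities enter only as penalty terms. The utilities, capacities, and penalty weight below are hypothetical stand-ins for the trained DNN's outputs and the actual problem data:

```python
import numpy as np

def softmax_rows(Z):
    """Row-wise softmax: each user's association weights sum to exactly 1."""
    E = np.exp(Z - Z.max(axis=1, keepdims=True))   # subtract max for stability
    return E / E.sum(axis=1, keepdims=True)

def gap_loss(Z, utility, capacity, mu=10.0):
    """Customized DUL loss: negative utility plus capacity-violation penalty.

    Z:        DNN output logits, shape (num_users, num_BSs)
    utility:  per-(user, BS) objective coefficients, same shape as Z
    capacity: per-BS load limit, shape (num_BSs,)
    """
    X = softmax_rows(Z)                            # soft association matrix
    reward = np.sum(X * utility)                   # objective term
    load = X.sum(axis=0)                           # expected load per BS
    penalty = np.sum(np.maximum(0.0, load - capacity) ** 2)
    return -reward + mu * penalty
```

Only the inequality (capacity) constraints need penalty terms; the equality constraints never appear in the loss, which is the simplification the abstract refers to.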