Abstract: The significance of distributed learning and inference algorithms in Internet of Things (IoT) networks is growing, since they flexibly distribute the computation load between IoT devices and the infrastructure, enhance data privacy, and minimize latency. However, a notable challenge stems from the influence of communication channel conditions on their performance. In this work, we introduce COMSPLIT: a novel communication-aware design for the split learning (SL) and inference paradigm, tailored to processing time-series data in IoT networks. COMSPLIT provides a versatile framework for deploying adaptable SL in IoT networks affected by diverse channel conditions. Complemented by an early-exit strategy and by support for IoT scenarios containing devices with heterogeneous computational capabilities, COMSPLIT represents a comprehensive design solution for communication-aware SL in IoT networks. Numerical results show the superior performance of COMSPLIT compared to vanilla SL approaches (which assume an ideal communication channel), demonstrating its ability to offer both design simplicity and adaptability to different channel conditions.
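To make the split-inference and early-exit ideas concrete, here is a minimal sketch of a model split between a device and a server, where the device exits early when its local head is confident. The layer sizes, split point, and confidence threshold are illustrative assumptions, not COMSPLIT's actual architecture:

```python
# A minimal sketch of split inference with an on-device early exit.
# All dimensions and the 0.9 threshold are illustrative assumptions.
import torch
import torch.nn as nn

class DeviceSide(nn.Module):
    """Device-side layers plus an early-exit classifier head."""
    def __init__(self, in_dim=64, hidden=32, n_classes=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit_head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        z = self.body(x)                 # intermediate activations ("smashed data")
        return z, self.exit_head(z)      # also return the early-exit logits

class ServerSide(nn.Module):
    """Server-side layers that finish the computation if no early exit."""
    def __init__(self, hidden=32, n_classes=5):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_classes))

    def forward(self, z):
        return self.head(z)

def split_inference(x, device_net, server_net, conf_threshold=0.9):
    """Exit on-device when the early-exit head is confident; otherwise
    transmit the activations over the (possibly noisy) channel to the server."""
    z, exit_logits = device_net(x)
    conf = torch.softmax(exit_logits, dim=-1).max()
    if conf >= conf_threshold:
        return exit_logits, "early-exit"
    return server_net(z), "server"

device_net, server_net = DeviceSide(), ServerSide()
logits, path = split_inference(torch.randn(1, 64), device_net, server_net)
print(path, logits.argmax().item())
```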
Abstract: Establishing and maintaining 5G mmWave vehicular connectivity poses a significant challenge due to high user mobility, which necessitates frequent triggering of beam switching procedures. Departing from reactive beam switching based on user device channel state feedback, proactive beam switching prepares upcoming beam switching decisions in advance by exploiting accurate channel state information (CSI) prediction. In this paper, we develop a framework for autonomous, self-trained CSI prediction for mmWave vehicular users, in which a base station (gNB) collects and labels a dataset that it then uses to train a recurrent neural network (RNN)-based CSI prediction model. The proposed framework exploits the CSI feedback from vehicular users combined with overhearing the C-V2X cooperative awareness messages (CAMs) they broadcast. We implement and evaluate the proposed framework using the DeepMIMO dataset generation environment and demonstrate its capability to provide accurate CSI prediction for 5G mmWave vehicular users, investigating how various input features affect the prediction accuracy.
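As a concrete illustration, the following is a minimal sketch of an RNN-based CSI predictor that consumes a short history of CSI reports together with CAM-derived mobility features. The CSI dimension, the four CAM features, and the GRU size are illustrative assumptions, not the paper's exact model:

```python
# A minimal sketch of RNN-based CSI prediction from CSI + CAM histories.
# Feature and CSI dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CSIPredictor(nn.Module):
    def __init__(self, csi_dim=16, cam_dim=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=csi_dim + cam_dim,
                          hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, csi_dim)        # regress the next CSI vector

    def forward(self, csi_hist, cam_hist):
        x = torch.cat([csi_hist, cam_hist], dim=-1)  # (batch, T, csi+cam)
        h, _ = self.rnn(x)
        return self.out(h[:, -1])                    # prediction for time t+1

model = CSIPredictor()
csi_hist = torch.randn(8, 10, 16)   # 8 users, 10 past CSI reports each
cam_hist = torch.randn(8, 10, 4)    # matching CAM features (e.g. x, y, speed, heading)
pred = model(csi_hist, cam_hist)
loss = nn.functional.mse_loss(pred, torch.randn(8, 16))  # self-labelled next-CSI target
loss.backward()
```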
Abstract: In this paper, we propose a novel decoding method for Quantum Low-Density Parity-Check (QLDPC) codes based on Graph Neural Networks (GNNs). Similar to Belief Propagation (BP)-based QLDPC decoders, the proposed GNN-based QLDPC decoder exploits the sparse graph structure of QLDPC codes and can be implemented as a message-passing decoding algorithm. We compare the proposed GNN-based decoding algorithm against selected classes of both conventional and neural-enhanced QLDPC decoding algorithms across several QLDPC code designs. The simulation results demonstrate the excellent performance of GNN-based decoders along with their low complexity compared to competing methods.
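To illustrate the message-passing flavour of such a decoder, below is a minimal sketch of learned message passing on a code's Tanner graph. The tiny parity-check matrix, the GRU-based node update, and the network sizes are illustrative assumptions, not one of the QLDPC codes or the exact decoder from the paper:

```python
# A minimal sketch of GNN-style message passing over a Tanner graph.
# The toy parity-check matrix H and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

H = torch.tensor([[1, 1, 0, 1],      # toy parity-check matrix (checks x bits)
                  [0, 1, 1, 1]], dtype=torch.float32)

class GNNDecoder(nn.Module):
    def __init__(self, hidden=16, iters=5):
        super().__init__()
        self.iters = iters
        self.embed = nn.Linear(1, hidden)             # lift channel LLRs to node features
        self.var_update = nn.GRUCell(hidden, hidden)  # learned variable-node update
        self.readout = nn.Linear(hidden, 1)           # per-bit soft output

    def forward(self, llr):                           # llr: (n_bits,)
        v = self.embed(llr.unsqueeze(-1))             # variable-node states
        for _ in range(self.iters):
            c = H @ v                                 # aggregate bits -> checks
            m = H.t() @ c                             # aggregate checks -> bits
            v = self.var_update(m, v)                 # learned node update
        return self.readout(v).squeeze(-1)            # refined per-bit LLRs

dec = GNNDecoder()
noisy_llr = torch.randn(4)                            # channel LLRs for the 4-bit toy code
print(dec(noisy_llr))
```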
Abstract: Distributed learning and inference algorithms have become indispensable for IoT systems, offering benefits such as workload alleviation, data privacy preservation, and reduced latency. This paper introduces an innovative approach that utilizes unmanned aerial vehicles (UAVs) as coverage-extension relays for IoT environmental monitoring in rural areas. Our method integrates a split learning (SL) strategy between edge devices, a UAV, and a server to enhance the adaptability and performance of inference mechanisms. By employing UAVs as relays and by incorporating SL, we address the connectivity and resource constraints of IoT learning applications in remote settings. Our system model accounts for diverse channel conditions to determine the most suitable transmission strategy for optimal system behaviour. Through simulation analysis, the proposed approach demonstrates its robustness and adaptability, excelling even under adverse channel conditions. Integrating UAV relaying and the SL paradigm offers significant flexibility to the server, enabling adaptive strategies that consider various trade-offs beyond simply maximizing overall inference quality.
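As a concrete illustration of the transmission-strategy choice, here is a minimal sketch that picks the direct device-to-server link or the UAV relay depending on which end-to-end SNR is better, falling back to local inference if neither is usable. The SNR model, the decode-and-forward bottleneck assumption, and the threshold are illustrative, not the paper's system model:

```python
# A minimal sketch of channel-aware transmission-strategy selection.
# SNR values, threshold, and the relay model are illustrative assumptions.
import numpy as np

def relay_snr(snr_device_uav_db, snr_uav_server_db):
    """End-to-end SNR of a two-hop decode-and-forward relay is limited by
    the weaker hop (a common simplifying assumption)."""
    return min(snr_device_uav_db, snr_uav_server_db)

def choose_strategy(snr_direct_db, snr_device_uav_db, snr_uav_server_db,
                    min_snr_db=0.0):
    candidates = {
        "direct": snr_direct_db,
        "uav_relay": relay_snr(snr_device_uav_db, snr_uav_server_db),
    }
    best = max(candidates, key=candidates.get)
    if candidates[best] < min_snr_db:      # neither link is usable:
        return "local_inference"           # fall back to on-device inference
    return best

rng = np.random.default_rng(0)
for _ in range(3):
    snrs = rng.normal([-2.0, 8.0, 12.0], 3.0)   # rural direct link tends to be weak
    print(choose_strategy(*snrs))
```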
Abstract: Enabling real-time communication in Industrial Internet of Things (IIoT) networks is crucial to support autonomous, self-organized, and reconfigurable industrial automation for Industry 4.0 and the forthcoming Industry 5.0. In this paper, we consider a SIC-assisted real-time IIoT network, in which sensor nodes generate reports according to an event-generation probability that is specific to the monitored phenomenon. The reports are delivered over a block-fading channel to a common Access Point (AP) in a slotted ALOHA fashion; the AP leverages the imbalances in the received powers among the contending users and applies successive interference cancellation (SIC) to decode user packets from the collisions. We provide an extensive analytical treatment of this setup, deriving the Age of Information (AoI), throughput, and deadline violation probability when the AP has access to either perfect or imperfect channel-state information. We show that adopting SIC improves all performance parameters with respect to both standard slotted ALOHA and an age-dependent access method. The analytical results agree with the simulation-based ones, demonstrating that investing in SIC capability at the receiver enables this simple access method to support timely and efficient information delivery in IIoT networks.
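The following is a minimal Monte Carlo sketch of the setup: sensors contend in slotted ALOHA over a Rayleigh block-fading channel, the receiver decodes packets strongest-first with SIC, and the average AoI is tracked. The event probability, SINR threshold, noise level, and AoI convention are illustrative assumptions, not the paper's analytical model:

```python
# A minimal Monte Carlo sketch of SIC-aided slotted ALOHA with AoI tracking.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, slots, p_event, theta, noise = 10, 20000, 0.05, 1.0, 0.1
age = np.zeros(N)                                      # per-sensor AoI in slots
age_sum = 0.0

for _ in range(slots):
    age += 1
    active = np.flatnonzero(rng.random(N) < p_event)   # sensors with new reports
    powers = rng.exponential(1.0, size=active.size)    # Rayleigh fading -> exp. power
    order = np.argsort(powers)[::-1]                   # decode strongest first
    residual = powers.sum()
    for k in order:                                    # successive interference canc.
        sinr = powers[k] / (residual - powers[k] + noise)
        if sinr < theta:
            break                                      # remaining packets are lost
        age[active[k]] = 0                             # fresh update delivered
        residual -= powers[k]                          # cancel the decoded packet
    age_sum += age.mean()

print("average AoI (slots):", age_sum / slots)
```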
Abstract: End-to-end design of communication systems using deep autoencoders (AEs) is gaining attention due to its flexibility and excellent performance. Besides single-user transmission, AE-based design has recently been explored in multi-user setups, e.g., for designing constellations for non-orthogonal multiple access (NOMA). In this paper, we further advance the design of AE-based downlink NOMA by introducing a weighted loss function in the AE training. By changing the weight coefficients, one can flexibly tune the constellation design to balance the error probabilities of different users, without relying on explicit information about their channel quality. Combined with the SICNet decoder, we demonstrate a significant improvement in the achievable error-probability levels and flexible control over the error probabilities of different users using the proposed weighted AE-based framework.
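To illustrate the weighted-loss idea, below is a minimal sketch of a two-user downlink AE in which per-user cross-entropy terms are combined with tunable weights, so the learned superimposed constellation can favour one user. The architecture sizes, weight values, SNRs, and the simple per-user AWGN receivers (in place of the SICNet decoder) are illustrative assumptions:

```python
# A minimal sketch of weighted-loss training for a two-user downlink AE.
# Sizes, weights, and SNRs are illustrative assumptions.
import torch
import torch.nn as nn

M = 4                                   # messages per user
enc = nn.Sequential(nn.Linear(2 * M, 16), nn.ReLU(), nn.Linear(16, 2))  # joint encoder -> I/Q
dec1 = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, M))     # user-1 receiver
dec2 = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, M))     # user-2 receiver

def weighted_loss(w1=0.7, w2=0.3, batch=256, snr1_db=10.0, snr2_db=4.0):
    m1, m2 = (torch.randint(0, M, (batch,)) for _ in range(2))
    x = enc(torch.cat([nn.functional.one_hot(m1, M),
                       nn.functional.one_hot(m2, M)], dim=-1).float())
    x = x / x.pow(2).sum(-1, keepdim=True).sqrt().mean()   # rough power normalisation

    def awgn(sig, snr_db):
        sigma = (10 ** (-snr_db / 10) / 2) ** 0.5
        return sig + sigma * torch.randn_like(sig)

    ce = nn.functional.cross_entropy
    # tuning (w1, w2) trades off the two users' error probabilities
    return w1 * ce(dec1(awgn(x, snr1_db)), m1) + w2 * ce(dec2(awgn(x, snr2_db)), m2)

loss = weighted_loss()
loss.backward()
print(float(loss))
```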
Abstract: As phasor measurement units (PMUs) become more widely used in transmission power systems, a fast state estimation (SE) algorithm that can take advantage of their high sample rates is needed. To accomplish this, we present a method that uses graph neural networks (GNNs) to learn complex-valued bus voltage estimates from PMU voltage and current measurements. We propose an original implementation of GNNs over the power system's factor graph to simplify the integration of various types and quantities of measurements on power system buses and branches. Furthermore, we augment the factor graph to improve the robustness of GNN predictions. The model is highly efficient and scalable, as its computational complexity is linear with respect to the number of nodes in the power system. Training and test examples were generated by randomly sampling sets of power system measurements and annotated with the exact solutions of linear SE with PMUs. The numerical results demonstrate that the GNN model provides an accurate approximation of the SE solutions. Furthermore, errors caused by PMU malfunctions or communication failures that would normally render the SE problem unobservable have only a local effect and do not deteriorate the results in the rest of the power system.
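The following is a minimal sketch of GNN-style message passing over a factor graph, where variable nodes (bus voltages) exchange learned messages with factor nodes (measurements) through the graph's incidence structure. The tiny 3-bus example, the dense incidence matrix, and the layer sizes are illustrative assumptions, not the paper's augmented factor graph or exact architecture:

```python
# A minimal sketch of learned message passing over a measurement factor graph.
# The toy incidence matrix A and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

# incidence: rows = measurement factors, cols = state variables (buses)
A = torch.tensor([[1., 1., 0.],     # branch measurement between buses 1-2
                  [0., 1., 1.],     # branch measurement between buses 2-3
                  [1., 0., 0.]])    # bus-1 voltage phasor measurement

class FactorGraphGNN(nn.Module):
    def __init__(self, hidden=32, layers=4):
        super().__init__()
        self.meas_embed = nn.Linear(2, hidden)     # each phasor: (real, imag)
        self.var_update = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            for _ in range(layers))
        self.readout = nn.Linear(hidden, 2)        # complex-valued voltage estimate

    def forward(self, meas):
        f = self.meas_embed(meas)                  # factor-node features
        v = A.t() @ f                              # init variables from factors
        for layer in self.var_update:
            f2v = A.t() @ (A @ v)                  # variable -> factor -> variable pass
            v = layer(v + f2v)                     # residual learned update
        return self.readout(v)

gnn = FactorGraphGNN()
meas = torch.randn(3, 2)                           # 3 phasor measurements
print(gnn(meas))                                   # (3 buses, real+imag voltage)
```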
Abstract: Data-driven state estimation (SE) is becoming increasingly important in modern power systems, as it allows for more efficient analysis of system behaviour using real-time measurement data. This paper thoroughly evaluates a phasor measurement unit (PMU)-only state estimator based on graph neural networks (GNNs) applied over factor graphs. To assess the sample efficiency of the GNN model, we perform multiple training experiments with various training set sizes. Additionally, to evaluate the scalability of the GNN model, we conduct experiments on power systems of various sizes. Our results show that the GNN-based state estimator exhibits high accuracy and efficient use of data. Moreover, it demonstrates scalability in terms of both memory usage and inference time, making it a promising solution for data-driven SE in modern power systems.
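As a sketch of the sample-efficiency methodology, the loop below trains the same model on nested training subsets of growing size under a fixed training budget and compares held-out error. The synthetic data and the simple regression model are illustrative stand-ins for the GNN state estimator and its labelled SE solutions, not the paper's experiments:

```python
# A minimal sketch of a sample-efficiency experiment over training-set sizes.
# Data, model, and training budget are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(5000, 8)
y = X @ torch.randn(8, 2) + 0.05 * torch.randn(5000, 2)   # synthetic "SE labels"
X_val, y_val = X[4000:], y[4000:]                         # held-out validation split

for n_train in (100, 500, 2000, 4000):
    model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(300):                                  # fixed budget per subset
        loss = nn.functional.mse_loss(model(X[:n_train]), y[:n_train])
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        val = nn.functional.mse_loss(model(X_val), y_val)
    print(f"n_train={n_train:5d}  val MSE={val:.4f}")
```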
Abstract: Electrical power systems are increasing in size, complexity, and dynamics due to the growing integration of renewable energy resources with sporadic power generation. This necessitates the development of near-real-time power system algorithms whose computational complexity remains low with respect to the power system size. Considering the growing trend in the collection of historical measurement data and recent advances in the rapidly developing field of deep learning, the main goal of this paper is to provide a review of recent deep learning-based power system monitoring and optimization algorithms. Electrical utilities can benefit from this review by re-implementing or enhancing the algorithms traditionally used in energy management systems (EMS) and distribution management systems (DMS).
Abstract: Most of today's communication systems are designed for reliable message recovery after the entire encoded message (codeword) has been received. However, in many practical scenarios, the transmission process may be interrupted before the complete codeword is received. This paper proposes a novel rateless autoencoder (AE)-based code design suitable for decoding the transmitted message before the noisy codeword is fully received. Using particular dropout strategies applied during the training process, rateless AE codes allow trading off decoding delay against reliability, providing a graceful improvement of the latter with each additionally received codeword symbol. The proposed rateless AEs significantly outperform conventional AE designs in scenarios where it is desirable to trade off reliability for lower decoding delay.
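To make the training idea concrete, here is a minimal sketch in which the tail of each noisy codeword is randomly dropped (zeroed) during training, so the decoder learns to recover the message from any received prefix. The uniform tail-dropout schedule, layer sizes, and noise level are illustrative assumptions, not the paper's exact dropout strategy:

```python
# A minimal sketch of rateless-AE training with random tail dropout.
# The dropout schedule and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

M, n = 16, 8                                     # message set size, codeword length
enc = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
dec = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for step in range(200):
    m = torch.randint(0, M, (256,))
    x = enc(nn.functional.one_hot(m, M).float())
    x = x / x.norm(dim=-1, keepdim=True)         # per-codeword power constraint
    y = x + 0.3 * torch.randn_like(x)            # AWGN channel
    k = torch.randint(1, n + 1, (256, 1))        # received prefix length per sample
    mask = (torch.arange(n) < k).float()         # keep first k symbols, zero the rest
    loss = nn.functional.cross_entropy(dec(y * mask), m)
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", float(loss))
```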