Abstract: Decentralized learning (DL) offers a powerful framework where nodes collaboratively train models without sharing raw data and without the coordination of a central server. In the iterative rounds of DL, models are trained locally, shared with neighbors in the topology, and aggregated with the models received from neighbors. Sharing and merging models drive convergence towards a consensus model that generalizes better across the collective data captured at training time. In addition, the energy consumed while sharing and merging model parameters is negligible compared to the energy spent during the training phase. Leveraging this fact, we present SkipTrain, a novel DL algorithm that minimizes energy consumption in decentralized learning by strategically skipping some training rounds and substituting them with synchronization rounds. These training-silent periods, besides saving energy, also allow models to mix better and ultimately produce models with higher accuracy than typical DL algorithms that train in every round. Our empirical evaluations with 256 nodes demonstrate that SkipTrain reduces energy consumption by 50% and increases model accuracy by up to 12% compared to D-PSGD, the conventional DL algorithm.
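The round structure described above can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions, not the paper's implementation: nodes hold scalar models on a ring topology, a training round is one gradient step on a toy quadratic loss (standing in for non-IID local data), and a synchronization round is neighbor averaging with no training. All function names (`run`, `mix`, `local_step`) and the fixed schedule are illustrative assumptions.

```python
import numpy as np

def ring_neighbors(i, n):
    # Illustrative topology: each node talks to its two ring neighbors.
    return [(i - 1) % n, (i + 1) % n]

def mix(models):
    # Synchronization round: every node averages its model with its neighbors'.
    n = len(models)
    return np.array([
        np.mean([models[i]] + [models[j] for j in ring_neighbors(i, n)])
        for i in range(n)
    ])

def local_step(models, targets, lr=0.1):
    # Training round: one gradient step on a toy quadratic loss (m - t)^2 per node.
    return models - lr * 2 * (models - targets)

def run(schedule, n=8, seed=0):
    rng = np.random.default_rng(seed)
    models = rng.normal(size=n)    # initial local models (scalars for brevity)
    targets = rng.normal(size=n)   # each node's local optimum (proxy for non-IID data)
    for kind in schedule:
        if kind == "train":
            models = local_step(models, targets)
        else:  # "sync": training-silent mixing round, negligible energy cost
            models = mix(models)
    return models
```

For example, comparing `run(["train"] * 10 + ["sync"] * 10)` against `run(["train"] * 10)` shows that the training-silent synchronization rounds shrink the disagreement (standard deviation) across node models, i.e., they drive the network towards consensus at negligible energy cost.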
Abstract: Traffic prediction is one of the crucial tasks for smartly optimizing the mobile network. Research on this topic has concentrated on making predictions in a centralized fashion, i.e., by collecting data from the different network elements. This translates into a considerable amount of energy spent on data transmission and processing. In this work, we propose a novel prediction framework based on edge computing which uses datasets obtained at the edge through a large measurement campaign. Two main Deep Learning architectures are designed, based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), and tested under different training conditions. In addition, Knowledge Transfer Learning (KTL) techniques are employed to improve the performance of the models while reducing the required computational resources. Simulation results show that the CNN architectures outperform the RNNs. An estimate of the required training energy is provided, highlighting the ability of KTL to reduce the energy footprint of the models by 60% and 90% for CNNs and RNNs, respectively. Finally, two cutting-edge explainable Artificial Intelligence techniques are employed to interpret the derived learning models.
Abstract: Cellular traffic prediction is a crucial activity for optimizing fifth-generation (5G) networks and beyond, as accurate forecasting is essential for intelligent network design, resource allocation and anomaly mitigation. Although machine learning (ML) is a promising approach to effectively predict network traffic, the centralization of massive data in a single data center raises issues regarding confidentiality, privacy and data transfer demands. To address these challenges, federated learning (FL) emerges as an appealing ML training framework which offers highly accurate predictions through parallel distributed computations. However, the environmental impact of these methods is often overlooked, which calls into question their sustainability. In this paper, we address the trade-off between accuracy and energy consumption in FL by proposing a novel sustainability indicator that allows assessing the feasibility of ML models. Then, we comprehensively evaluate state-of-the-art deep learning (DL) architectures in a federated scenario using real-world measurements from base station (BS) sites in the area of Barcelona, Spain. Our findings indicate that larger ML models achieve marginally better performance but have a significant environmental impact in terms of carbon footprint, which makes them impractical for real-world applications.
Abstract: Federated learning (FL) is one of the most appealing alternatives to the standard centralized learning paradigm, allowing a heterogeneous set of devices to train a machine learning model without sharing their raw data. However, FL requires a central server to coordinate the learning process, thus introducing potential scalability and security issues. In the literature, server-less FL approaches such as gossip federated learning (GFL) and blockchain-enabled federated learning (BFL) have been proposed to mitigate these issues. In this work, we provide a complete overview of these three techniques and compare them according to an integral set of performance indicators, including model accuracy, time complexity, communication overhead, convergence time and energy consumption. An extensive simulation campaign permits a quantitative analysis. In particular, GFL saves 18% of the training time, 68% of the energy and 51% of the data to be shared with respect to the centralized FL (CFL) solution, but it cannot reach the accuracy level of CFL. On the other hand, BFL represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing. Finally, we identify open issues in the two decentralized federated learning implementations and provide insights on potential extensions and possible research directions in this new research field.
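The key difference between CFL and GFL is the aggregation primitive: instead of a server averaging all client models, gossip-based approaches rely on repeated peer-to-peer exchanges. The following is a minimal sketch of pairwise gossip averaging under stated assumptions; the random-pair schedule and function names are illustrative, not the protocol evaluated in the paper.

```python
import numpy as np

def gossip_round(models, rng):
    # One gossip exchange: a randomly chosen pair of nodes averages their models.
    i, j = rng.choice(len(models), size=2, replace=False)
    avg = (models[i] + models[j]) / 2.0
    models = models.copy()
    models[i] = models[j] = avg
    return models

def gossip_average(models, rounds=200, seed=0):
    # Repeated pairwise exchanges drive all nodes towards the global average
    # without any central coordinator (no server, no single point of failure).
    rng = np.random.default_rng(seed)
    models = np.asarray(models, dtype=float)
    for _ in range(rounds):
        models = gossip_round(models, rng)
    return models
```

Each exchange preserves the global mean while shrinking the disagreement between nodes, which is why gossip can approach the centralized average with only local communication; the trade-off the abstract quantifies is that this convergence is slower and typically stops short of CFL's accuracy.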
Abstract: To meet the growing demand for enhanced network capacity, mobile network operators (MNOs) are deploying dense infrastructures of small cells. This, in turn, increases the power consumption of mobile networks, thus impacting the environment. As a result, there has been a recent trend of powering mobile networks with harvested ambient energy to achieve both environmental and cost benefits. In this paper, we consider a network of virtualized small cells (vSCs) powered by energy harvesters and equipped with rechargeable batteries, which can opportunistically offload baseband (BB) functions to a grid-connected edge server depending on their energy availability. We formulate the corresponding grid energy and traffic drop rate minimization problem, and propose a distributed deep reinforcement learning (DDRL) solution. Coordination among vSCs is enabled via the exchange of battery state information. The evaluation of the network performance in terms of grid energy consumption and traffic drop rate confirms that enabling coordination among the vSCs via knowledge exchange achieves a performance close to the optimum. Numerical results also confirm that the proposed DDRL solution provides higher network performance, better adaptation to the changing environment, and higher cost savings than a tabular multi-agent reinforcement learning (MRL) solution used as a benchmark.
Abstract: In this paper, we discuss a handover management scheme for Next Generation Self-Organized Networks. We propose to extract experience from full protocol stack data to make smart handover decisions in a multi-cell scenario where users move and are challenged by deep outage zones. Traditional handover schemes have the drawback of taking into account only the signal strength of the serving and target cells before the handover. However, we believe that the expected Quality of Experience (QoE) resulting from the choice of the target cell should be the driving principle of the handover decision. In particular, we propose two models: one based on a multi-layer many-to-one LSTM architecture, and one based on a multi-layer LSTM AutoEncoder (AE) in conjunction with a MultiLayer Perceptron (MLP) neural network. We show that, using experience extracted from data, we can improve the number of users finalizing their download by 18% and reduce the time to download with respect to a standard event-based handover benchmark scheme. Moreover, for the sake of generalization, we test the LSTM AutoEncoder in a different scenario, where it maintains its performance improvements with only a slight degradation compared to the original scenario.