Abstract: Federated Learning (FL) is a promising machine learning approach for the Internet of Things (IoT), but it must address network congestion as the population of IoT devices grows. Hierarchical FL (HFL) alleviates this issue by distributing model aggregation across multiple edge servers. Nevertheless, the challenge of communication overhead remains, especially when all IoT devices join the training process simultaneously. For scalability, practical HFL schemes select a subset of IoT devices to participate in training, hence the notion of device scheduling. In this setting, only the selected IoT devices participate in global training, each assigned to one edge server. Existing HFL assignment methods are primarily based on search mechanisms, which suffer from high latency in finding the optimal assignment. This paper proposes an improved K-Center algorithm for device scheduling and introduces a deep reinforcement learning-based approach for assigning IoT devices to edge servers. Experiments show that scheduling 50% of IoT devices is generally adequate for HFL to converge, with much lower time delay and energy consumption. Where reducing energy consumption (as in Green AI) and the number of messages (to avoid burst traffic) are key objectives, scheduling 30% of IoT devices yields substantial reductions in both while maintaining similar model accuracy.
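The improved K-Center variant is not detailed in the abstract; for orientation, here is a minimal Python sketch of the classic greedy K-Center selection it builds on, applied to hypothetical device feature vectors (the feature construction and the seed choice are assumptions, not the paper's method):

```python
import numpy as np

def greedy_k_center(features: np.ndarray, k: int) -> list[int]:
    """Greedy K-Center: repeatedly pick the device farthest from the
    devices selected so far, so the k picks 'cover' the population."""
    selected = [0]                        # arbitrary seed device
    dist = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(dist))        # farthest from all current centers
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Example: schedule 50% of 100 devices described by 8-dimensional features.
rng = np.random.default_rng(0)
devices = rng.normal(size=(100, 8))
print(greedy_k_center(devices, k=50))
```

The greedy rule gives a 2-approximation to the optimal covering radius, which is why K-Center-style selection is a natural fit for picking a representative subset of devices to schedule.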
Abstract: The large population of wireless users is a key driver of data-crowdsourced Machine Learning (ML). However, data privacy remains a significant concern. Federated Learning (FL) enables collaborative ML without requiring data to leave users' devices, but it imposes heavy computation and communication overheads on mobile devices. Hierarchical FL (HFL) alleviates this problem by performing partial model aggregation at edge servers. HFL can further reduce energy consumption and latency through effective resource allocation and appropriate user assignment. Nevertheless, resource allocation in HFL involves optimizing multiple variables, and the objective function must account for both energy consumption and latency, making the design of resource allocation algorithms complicated. Moreover, user assignment is challenging: it is a combinatorial optimization problem over a large search space. This article proposes a spectrum resource optimization algorithm (SROA) and a two-stage iterative algorithm (TSIA) for HFL. Given an arbitrary user assignment pattern, SROA optimizes CPU frequency, transmit power, and bandwidth to minimize system cost. TSIA then searches for a user assignment pattern that considerably reduces the total system cost. Experimental results demonstrate the superiority of the proposed HFL framework over existing studies in reducing energy consumption and latency.
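Neither SROA's subproblem solution nor TSIA's search rule is given in the abstract; the sketch below only illustrates the two-stage structure, with an assumed toy cost model (the channel gains, the shared power grid search, and greedy single-user reassignment are all illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
N_USERS, N_EDGES = 8, 2
gain = rng.uniform(0.5, 2.0, size=(N_USERS, N_EDGES))   # assumed channel gains

def resource_opt(assign):
    """Stage-1 stand-in for SROA: coarse grid search over one shared
    transmit-power level (the paper optimizes per-user CPU frequency,
    power, and bandwidth)."""
    best_cost = np.inf
    for p in np.linspace(0.1, 1.0, 10):                  # candidate powers
        rate = np.log2(1.0 + p * gain[np.arange(N_USERS), assign])
        latency = np.max(1.0 / rate)                     # slowest user dominates
        energy = N_USERS * p / np.mean(rate)
        best_cost = min(best_cost, latency + 0.5 * energy)
    return best_cost

def tsia(iters=10):
    """Stage-2 stand-in for TSIA: greedy single-user reassignment,
    re-running the resource optimization after every tentative move."""
    assign = rng.integers(0, N_EDGES, size=N_USERS)
    cost = resource_opt(assign)
    for _ in range(iters):
        for u in range(N_USERS):
            for e in range(N_EDGES):
                trial = assign.copy(); trial[u] = e
                c = resource_opt(trial)
                if c < cost:
                    assign, cost = trial, c
    return assign, cost

print(tsia())
```

The point of the two-stage decomposition is visible even in this toy: the inner resource optimization is cheap for a fixed assignment, so the outer combinatorial search can afford to evaluate many candidate assignments.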
Abstract: Machine learning (ML) is a widely accepted means of supporting customized services for mobile devices and applications. Federated Learning (FL), a promising approach to implementing ML while addressing data privacy concerns, typically involves a large number of wireless mobile devices collecting model training data. Under such circumstances, FL is expected to meet stringent training latency requirements despite limited resources: wireless bandwidth, power budgets, and the computation constraints of participating devices. For practical reasons, FL selects a portion of devices to participate in each iteration of model training. Efficient resource management and device selection therefore have a significant impact on the practical use of FL. In this paper, we propose a spectrum allocation optimization mechanism for enhancing FL over a wireless mobile network. Specifically, the proposed mechanism minimizes the time delay of FL while respecting the energy constraints of individual participating devices, thus ensuring that all participating devices have sufficient resources to train their local models. To ensure fast convergence, a robust device selection method is also proposed, which helps FL converge swiftly, especially when the local datasets of the devices are not independent and identically distributed (non-iid). Experimental results show that (1) the proposed spectrum allocation optimization method minimizes time delay while satisfying the individual energy constraints, and (2) the proposed device selection method enables FL to achieve the fastest convergence on non-iid datasets.
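The abstract does not state the selection criterion; one common proxy for robustness to non-iid data, shown below purely as an assumed illustration, is to greedily pick devices whose combined label histogram is closest to uniform:

```python
import numpy as np

def select_devices(label_hists, k):
    """Greedily add the device whose inclusion brings the aggregate
    label histogram closest to uniform (an assumed non-iid proxy)."""
    n, c = label_hists.shape
    uniform = np.full(c, 1.0 / c)
    chosen, agg = [], np.zeros(c)
    for _ in range(k):
        best, best_gap = None, np.inf
        for i in range(n):
            if i in chosen:
                continue
            cand = agg + label_hists[i]
            gap = np.linalg.norm(cand / cand.sum() - uniform)
            if gap < best_gap:
                best, best_gap = i, gap
        chosen.append(best)
        agg += label_hists[best]
    return chosen

# Example: 20 devices, 10 classes, skewed (non-iid) local label counts.
rng = np.random.default_rng(2)
hists = np.array([rng.multinomial(100, rng.dirichlet(0.3 * np.ones(10)))
                  for _ in range(20)], dtype=float)
print(select_devices(hists, k=5))
```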
Abstract: Deep Neural Networks (DNNs) have been widely applied in Internet of Things (IoT) systems for tasks such as image classification and object detection. However, heavyweight DNN models can hardly be deployed on edge devices due to their limited computational resources. In this paper, an edge-cloud cooperation framework is proposed to improve inference accuracy while maintaining low inference latency. To this end, we deploy a lightweight model on the edge and a heavyweight model on the cloud. A reinforcement learning (RL)-based DNN compression approach is used to generate the lightweight edge model from the heavyweight model. Moreover, a supervised learning (SL)-based offloading strategy is applied to determine whether each sample should be processed on the edge or on the cloud. Our method is implemented on real hardware and tested on multiple datasets. The experimental results show that (1) the lightweight models obtained by RL-based DNN compression are up to 87.6% smaller than those obtained by the baseline method; (2) the SL-based offloading strategy makes correct offloading decisions in most cases; and (3) our method reduces inference latency by up to 78.8% and achieves higher accuracy compared with the cloud-only strategy.
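The features used by the SL-based offloading strategy are not specified in the abstract; a plausible minimal version, sketched below with synthetic stand-in data, trains a classifier on the edge model's softmax confidence to predict whether a sample can stay on the edge:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def confidence_features(probs):
    """Features from the edge model's softmax: top-1 confidence,
    top-1/top-2 margin, and predictive entropy."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.column_stack([top2[:, 1], top2[:, 1] - top2[:, 0], entropy])

# Synthetic stand-ins: softmax outputs plus a 0/1 label saying whether
# the edge model's prediction was correct on that sample.
rng = np.random.default_rng(3)
probs = rng.dirichlet(np.ones(10), size=1000)
edge_correct = (probs.max(axis=1) > 0.4).astype(int)     # toy labels

clf = LogisticRegression().fit(confidence_features(probs), edge_correct)
# At inference: keep a sample on the edge when the classifier predicts
# the edge model will be correct; otherwise offload it to the cloud.
offload = clf.predict(confidence_features(probs)) == 0
print(f"offloaded to cloud: {offload.mean():.1%}")
```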
Abstract: While fifth-generation (5G) communications are being rolled out worldwide, sixth-generation (6G) communications have attracted much attention from both industry and academia. Compared with 5G, 6G will offer a wider frequency band, a higher transmission rate and spectrum efficiency, greater connection capacity, shorter delays, wider coverage, and stronger anti-interference capability to satisfy various network requirements. In this paper, we present a survey of potential essential technologies in 6G. In particular, we give an insightful understanding of the paradigms and applications of future 6G wireless communications by introducing index modulation (IM), artificial intelligence (AI), intelligent reflecting surfaces (IRS), simultaneous wireless information and power transfer (SWIPT), the space-air-ground-sea integrated network (SAGSIN), terahertz (THz) communications, visible light communications (VLC), blockchain-enabled wireless networks, holographic radio, full-duplex (FD) technology, Cell-Free Massive MIMO (CFmMM), and the security and privacy problems behind these technologies.
Abstract: The greenhouse environment is a key factor influencing crop production. However, it is difficult for classical control methods to give precise environment setpoints, such as temperature, humidity, light intensity, and carbon dioxide concentration, because a greenhouse is an uncertain nonlinear system. Therefore, an intelligent closed-loop control framework based on model-embedded deep reinforcement learning (MEDRL) is designed for greenhouse environment control. Specifically, computer vision algorithms are used to recognize the growing period and sex of crops, and crop growth models are trained for the different growing periods and sexes. The outputs of these models, combined with a cost factor, provide the setpoints for the greenhouse and are fed back to the control system in real time. The MEDRL system can conduct optimization control precisely and conveniently, and it greatly reduces costs compared with traditional greenhouse control approaches.
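The paper's vision models, growth models, and DRL algorithm are not described in the abstract; the stub below only wires up the closed loop the abstract sketches (recognize crop state, query a growth model for setpoints, adjust the controller), with every component replaced by an assumed placeholder:

```python
import numpy as np

def vision_stub(image):
    """Stand-in for the computer vision stage (labels are assumed)."""
    return {"period": "vegetative", "sex": "female"}

def growth_model_stub(state, cost_factor=1.0):
    """Stand-in for a crop growth model: returns target setpoints,
    mildly discounted by an operating-cost factor."""
    base = {"temp_C": 24.0, "humidity": 0.65, "co2_ppm": 800.0}
    return np.array([v * (1.0 - 0.05 * (cost_factor - 1.0))
                     for v in base.values()])

setpoints = np.array([20.0, 0.50, 400.0])        # initial controller output
for step in range(100):
    state = vision_stub(image=None)              # perceive crop state
    target = growth_model_stub(state)            # model-embedded setpoints
    reward = -np.sum((setpoints - target) ** 2)  # toy tracking reward
    setpoints += 0.1 * (target - setpoints)      # nudge toward the targets
print(setpoints, reward)
```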
Abstract: At high latitudes, many cities adopt centralized heating systems to improve energy generation efficiency and reduce pollution. In such multi-tier systems, known as district heating, few efficient approaches exist for flow rate control during the heating process. In this paper, we describe theoretical methods for solving this problem with deep reinforcement learning and propose a cloud-based heating control system for implementation. A real-world case study shows the effectiveness and practicability of the proposed system under human control, and simulated experiments with deep reinforcement learning show that about 1985.01 gigajoules of heat and 42276.45 tons of water are saved per hour compared with manual control.
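As a toy illustration of RL-based flow-rate control (the paper uses deep RL on real district-heating data; the dynamics, discretization, and reward below are assumptions), a tabular Q-learning agent can learn to push a discretized temperature error back to its setpoint:

```python
import numpy as np

# State: discretized indoor-temperature error (bin 5 = on setpoint).
# Actions: decrease / hold / increase the flow rate.
rng = np.random.default_rng(4)
N_STATES, ACTIONS = 11, (-1, 0, 1)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Assumed dynamics: flow changes shift the error bin, plus noise."""
    nxt = int(np.clip(state + action + rng.integers(-1, 2), 0, N_STATES - 1))
    reward = -abs(nxt - N_STATES // 2)   # penalty for deviating from setpoint
    return nxt, reward

state = int(rng.integers(N_STATES))
for _ in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    nxt, r = step(state, ACTIONS[a])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt
print(np.argmax(Q, axis=1) - 1)          # learned action per temperature bin
```

A deep RL variant would replace the Q-table with a neural network over continuous sensor readings, which is what makes the approach applicable to city-scale district heating.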