Abstract: Federated Learning (FL) has emerged as a fundamental learning paradigm for harnessing the massive data scattered across geo-distributed edge devices in a privacy-preserving way. Given the heterogeneous deployment of edge devices, however, their data are usually Non-IID, which introduces significant challenges to FL, including degraded training accuracy, intensive communication costs, and high computing complexity. Traditional approaches typically rely on adaptive mechanisms, which may suffer from scalability issues, increased computational overhead, and limited adaptability to diverse edge environments. Instead, this paper leverages the observation that computation offloading involves inherent functionalities, such as node matching and service correlation, that can reshape data distributions, and proposes the Federated learning based on computing Offloading (FlocOff) framework to address the challenges of data heterogeneity and constrained resources. Specifically, FlocOff formulates the FL process with Non-IID data in edge scenarios and derives a rigorous analysis of the impact of imbalanced data distribution. Based on this, FlocOff decouples the optimization into two steps: (1) minimizing the Kullback-Leibler (KL) divergence via computation offloading scheduling (MKL-CO); and (2) minimizing the communication cost through resource allocation (MCC-RA). Extensive experimental results demonstrate that the proposed FlocOff effectively improves model convergence and accuracy by 14.3%-32.7% while reducing data heterogeneity under various data distributions.
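To make the MKL-CO idea concrete, the following is a minimal sketch (not the authors' implementation) of how offloading can reduce label-distribution skew: it measures each client's KL divergence to the global label distribution and applies a toy greedy reassignment of over-represented classes. The client counts and the reassignment rule are illustrative assumptions.

```python
# Illustrative sketch of KL-divergence-driven data reshaping via offloading.
# Client label histograms and the greedy move rule are hypothetical examples,
# not the FlocOff scheduling algorithm itself.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Label histograms of raw data held by 3 edge clients (4 classes), before offloading.
client_counts = np.array([
    [80,  5,  5, 10],   # client 0: skewed toward class 0
    [ 5, 70, 20,  5],   # client 1: skewed toward class 1
    [10, 10, 40, 40],   # client 2: milder skew
], dtype=float)
global_dist = client_counts.sum(axis=0)

# Mean per-client KL divergence to the global distribution (heterogeneity proxy).
before = np.mean([kl_divergence(c, global_dist) for c in client_counts])

# Toy offloading step: move a fraction of each client's most over-represented
# class to the client where that class is most under-represented.
after_counts = client_counts.copy()
for i in range(len(client_counts)):
    cls = int(np.argmax(client_counts[i] / client_counts[i].sum()
                        - global_dist / global_dist.sum()))
    j = int(np.argmin(after_counts[:, cls] / after_counts.sum(axis=1)))
    moved = 0.3 * after_counts[i, cls]
    after_counts[i, cls] -= moved
    after_counts[j, cls] += moved

after = np.mean([kl_divergence(c, global_dist) for c in after_counts])
print(f"mean KL before offloading: {before:.4f}, after: {after:.4f}")
```

Running the sketch shows the mean KL divergence dropping after the reassignment, which is the quantity MKL-CO seeks to minimize through its offloading schedule.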
Abstract: Satellite Internet of Things (IoT) uses satellites as access points for IoT devices to achieve global coverage in future IoT systems, and is expected to support burgeoning IoT applications involving communication, sensing, and computing. However, the complex, dynamic satellite environment and limited network resources raise new challenges in the design of satellite IoT systems. In this article, we focus on the joint design of communication, sensing, and computing to improve the performance of satellite IoT, which differs substantially from terrestrial IoT systems. We describe how integrating the three functions can enhance system capabilities and summarize the state-of-the-art solutions. Furthermore, we discuss the main challenges of integrating communication, sensing, and computing in satellite IoT that remain to be addressed.
Abstract: Achieving efficient data collection is of paramount importance in the Internet of Things (IoT). Due to the inherent structural properties (e.g., sparsity) of many signals of interest, compressive sensing (CS) has been extensively used for data collection in IoT to improve both accuracy and energy efficiency. Apart from existing works that leverage CS as a channel coding scheme to deal with data loss during transmission, some recent results have started to employ CS as a source coding strategy. The projection matrices frequently used in these CS-based source coding schemes include dense random matrices (e.g., Gaussian or Bernoulli matrices) and structured matrices (e.g., Toeplitz matrices). However, these matrices are either difficult to implement on resource-constrained IoT sensor nodes or have limited applicability. To address these issues, this paper designs a novel, simple, and efficient projection matrix, named the sparse Gaussian matrix, which is easy and resource-saving to implement in practical IoT applications. We conduct both theoretical analysis and experimental evaluation of the designed sparse Gaussian matrix. The results demonstrate that employing the designed projection matrix for CS-based source coding significantly saves time and memory while ensuring satisfactory signal recovery performance.
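The sketch below illustrates the general idea of a sparse Gaussian projection matrix for CS-based source coding, under assumed parameters (nonzero probability p, variance 1/(m*p)) that stand in for the paper's exact construction; recovery uses a generic Orthogonal Matching Pursuit (OMP) decoder rather than the decoder evaluated by the authors.

```python
# Illustrative sketch: sparse Gaussian projection for CS-based source coding.
# The entry distribution and the OMP decoder are assumptions for demonstration,
# not the paper's exact design.
import numpy as np

def sparse_gaussian_matrix(m, n, p=0.1, seed=None):
    """m x n matrix: each entry nonzero w.p. p, drawn from N(0, 1/(m*p))."""
    rng = np.random.default_rng(seed)
    mask = rng.random((m, n)) < p
    values = rng.normal(0.0, 1.0 / np.sqrt(m * p), size=(m, n))
    return mask * values

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

n, m, k = 256, 80, 8                        # signal length, measurements, sparsity
rng = np.random.default_rng(0)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse test signal
Phi = sparse_gaussian_matrix(m, n, p=0.1, seed=1)
y = Phi @ x                                 # cheap encoding at the sensor node
x_rec = omp(Phi, y, k)
print("relative recovery error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```

Because only about p*n entries per row are nonzero, each measurement requires far fewer multiply-accumulate operations and far less stored matrix data than a dense Gaussian projection, which is the resource saving the abstract highlights for constrained sensor nodes.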