Abstract:Energy efficiency and information freshness are key requirements for sensor nodes serving Industrial Internet of Things (IIoT) applications, where a sink node collects informative and fresh data before a deadline, e.g., to control an external actuator. Content-based wake-up (CoWu) activates a subset of nodes that hold data relevant for the sink's goal, thereby offering an energy-efficient way to attain objectives related to information freshness. This paper focuses on a scenario where the sink collects fresh information on top-k values, defined as data from the nodes observing the k highest readings at the deadline. We introduce a new metric called top-k Query Age of Information (k-QAoI), which allows us to characterize the performance of CoWu by considering the characteristics of the physical process. Further, we show how to select the CoWu parameters, such as its timing and threshold, to attain both information freshness and energy efficiency. The numerical results reveal the effectiveness of the CoWu approach, which collects top-k data with higher energy efficiency and lower k-QAoI than round-robin scheduling, especially when the number of nodes is large and k is small.
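As a companion to the abstract above, the following is a minimal sketch of the CoWu idea it describes: only nodes whose reading exceeds a wake-up threshold transmit, and the sink checks whether the awakened set covers the true top-k at the deadline. The threshold values, reading distribution, and unit energy cost per transmission are illustrative assumptions; the k-QAoI metric itself is defined in the paper and not reproduced here.

```python
import numpy as np

def cowu_top_k_collection(readings, threshold, k):
    """Toy content-based wake-up (CoWu) round: only nodes whose current
    reading exceeds the wake-up threshold reply, and we check whether the
    replies cover the true top-k set at the deadline."""
    readings = np.asarray(readings)
    awakened = np.flatnonzero(readings >= threshold)   # nodes that wake up and transmit
    true_top_k = np.argsort(readings)[-k:]             # indices of the k highest readings
    collected = set(awakened) >= set(true_top_k)       # did we get all top-k nodes?
    energy = len(awakened)                             # assume 1 energy unit per transmission
    return collected, energy

# Example: 50 nodes with Gaussian readings; sweeping the threshold trades off
# the chance of missing a top-k node against the number of transmissions.
rng = np.random.default_rng(0)
readings = rng.normal(size=50)
for threshold in (0.0, 0.5, 1.0, 1.5):
    ok, cost = cowu_top_k_collection(readings, threshold, k=3)
    print(f"threshold={threshold:+.1f}  top-3 collected={ok}  transmissions={cost}")
```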
Abstract:The integration of Non-Terrestrial Networks (NTNs) with Low Earth Orbit (LEO) satellite constellations into 5G and Beyond is essential to achieve truly global connectivity. A distinctive characteristic of LEO mega-constellations is that they constitute a global infrastructure with predictable dynamics, which enables the pre-planned allocation of radio resources. However, the different bands that can be used for ground-to-satellite communication are affected differently by atmospheric conditions such as precipitation, which introduces uncertainty in the attenuation of the communication links at high frequencies. Based on this, we present a compelling case for applying integrated sensing and communications (ISAC) in heterogeneous and multi-layer LEO satellite constellations over wide areas. Specifically, we present an ISAC framework and frame structure to accurately estimate the attenuation of the communication links due to precipitation, with the aim of finding the optimal serving satellites and resource allocation for downlink communication with users on the ground. The results show that, by dedicating an adequate amount of resources to sensing and solving the association and resource allocation problems jointly, it is feasible to increase the average throughput by 59% and the fairness by 600% when compared to solving these problems separately.
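The sketch below illustrates, under toy assumptions, why sensing the rain attenuation matters for the satellite association described above: a simple power-law attenuation model (in the spirit of ITU-R P.838, with placeholder coefficients) is subtracted from clear-sky SNRs before each user greedily picks its serving satellite. This is not the paper's ISAC frame structure or joint optimization; it only shows how sensed attenuation can change the association decision.

```python
import numpy as np

def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.075, alpha=1.1):
    """Illustrative power-law specific attenuation (gamma = k * R^alpha dB/km);
    k and alpha are placeholders, not the paper's calibrated values."""
    return k * rain_rate_mm_h ** alpha * path_km

def associate_users(clear_sky_snr_db, rain_rate_mm_h, path_km):
    """Greedy association: each user picks the satellite with the highest SNR
    after subtracting the estimated rain attenuation on its link."""
    att = rain_attenuation_db(rain_rate_mm_h, path_km)   # shape (users, satellites)
    effective_snr = clear_sky_snr_db - att
    return np.argmax(effective_snr, axis=1), effective_snr

# 4 users, 3 candidate satellites: sensed rain changes which satellite serves whom.
rng = np.random.default_rng(1)
snr = rng.uniform(8, 15, size=(4, 3))    # clear-sky SNR in dB
rain = rng.uniform(0, 30, size=(4, 3))   # sensed rain rate per link, mm/h
path = rng.uniform(3, 8, size=(4, 3))    # slant path through rain, km
serving, eff = associate_users(snr, rain, path)
print("serving satellite per user:", serving)
```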
Abstract:This paper introduces a full solution for decentralized routing in Low Earth Orbit satellite constellations based on continual Deep Reinforcement Learning (DRL). This requires addressing multiple challenges, including the partial knowledge at the satellites and their continuous movement, and the time-varying sources of uncertainty in the system, such as traffic, communication links, or communication buffers. We follow a multi-agent approach, where each satellite acts as an independent decision-making agent, while acquiring limited knowledge of the environment based on the feedback received from nearby agents. The solution is divided into two phases. First, an offline learning phase relies on decentralized decisions and a global Deep Neural Network (DNN) trained with global experiences. Then, the online phase with local, on-board, and pre-trained DNNs requires continual learning to evolve with the environment, which can be done in two different ways: (1) Model anticipation, where the predictable conditions of the constellation are exploited by having each satellite share its local model with the next satellite; and (2) Federated Learning (FL), where each agent's model is first merged at the cluster level and then aggregated in a global Parameter Server. The results show that, in the absence of high congestion, the proposed Multi-Agent DRL framework achieves the same E2E performance as a shortest-path solution, but the latter requires intensive communication overhead for real-time network-wide knowledge of the system at a centralized node, whereas ours only requires limited feedback exchange among first-neighbour satellites. Importantly, our solution adapts well to congestion conditions and exploits less loaded paths. Moreover, the divergence of models over time is easily tackled by the synergy between anticipation, applied for short-term alignment, and FL, utilized for long-term alignment.
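A minimal sketch of the FL-based long-term alignment mentioned above is plain weighted parameter averaging (FedAvg-style), applied first per cluster and then at a global Parameter Server. The model structure, cluster partition, and weights below are illustrative assumptions; the paper's actual DNN architecture and aggregation schedule are not reproduced here.

```python
import numpy as np

def fed_avg(models, weights=None):
    """Weighted parameter averaging (FedAvg-style) over a list of models,
    where each model is a dict of NumPy arrays."""
    if weights is None:
        weights = np.ones(len(models))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return {name: sum(w * m[name] for w, m in zip(weights, models))
            for name in models[0]}

# Two-level aggregation mirroring the described hierarchy: merge per orbital
# cluster first, then aggregate the cluster models at a global Parameter Server.
rng = np.random.default_rng(2)
satellite_models = [{"w": rng.normal(size=4)} for _ in range(6)]
clusters = [satellite_models[:3], satellite_models[3:]]
cluster_models = [fed_avg(c) for c in clusters]
global_model = fed_avg(cluster_models, weights=[len(c) for c in clusters])
print("globally aggregated parameters:", global_model["w"])
```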
Abstract:The traditional role of the network layer is the transfer of packet replicas from source to destination through intermediate network nodes. We present a generative network layer that uses Generative AI (GenAI) at intermediate or edge network nodes and analyze its impact on the required data rates in the network. We conduct a case study where the GenAI-aided nodes generate images from prompts that consist of substantially compressed latent representations. The results from network flow analyses under image quality constraints show that the generative network layer can achieve an improvement of more than 100% in terms of the required data rate.
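To make the data-rate argument concrete, here is a back-of-the-envelope comparison, with illustrative payload sizes that are not taken from the paper, of the rate needed to deliver a full compressed image versus only a compact latent/prompt representation within the same deadline.

```python
def required_rate_kbps(payload_bits, deadline_s):
    """Rate needed to deliver a payload within a given deadline."""
    return payload_bits / deadline_s / 1e3

# Illustrative numbers only: a compressed image of ~50 kB versus a compact
# latent/prompt representation of ~2 kB, both delivered within 0.5 s.
image_bits = 50_000 * 8
latent_bits = 2_000 * 8
deadline = 0.5  # seconds
print("full image :", required_rate_kbps(image_bits, deadline), "kbps")
print("latent only:", required_rate_kbps(latent_bits, deadline), "kbps")
print("rate reduction factor:", image_bits / latent_bits)
```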
Abstract:The amount of data generated by Earth observation satellites can be enormous, which poses a great challenge to the satellite-to-ground connections with limited rate. This paper considers the problem of efficient downlink communication of multi-spectral satellite images for Earth observation using change detection. The proposed method for image processing consists of the joint design of cloud removal and change encoding, which can be seen as an instance of semantic communication, as it encodes important information, such as changed multi-spectral pixels (MPs), while aiming to minimize energy consumption. It comprises a three-stage end-to-end scoring mechanism that determines the importance of each MP before deciding on its transmission. Specifically, the sensed image is (1) standardized, (2) passed through high-performance cloud filtering via the Cloud-Net model, and (3) passed to the proposed scoring algorithm, which uses Change-Net to identify MPs with a high likelihood of being changed, compresses them, and forwards the result to the ground station. The experimental results indicate that the proposed framework is effective in optimizing energy usage while preserving high-quality data transmission in satellite-based Earth observation applications.
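The following is a minimal sketch of the three-stage pipeline described above. The cloud_net and change_net callables are hypothetical stand-ins for the Cloud-Net and Change-Net models (returning a cloud mask and a per-pixel change score, respectively), and the score threshold is an illustrative assumption.

```python
import numpy as np

def pipeline(image, reference, cloud_net, change_net, score_threshold=0.5):
    """Sketch of the three-stage scoring mechanism; models are placeholders."""
    # (1) standardize each spectral band to zero mean, unit variance
    mean = image.mean(axis=(0, 1), keepdims=True)
    std = image.std(axis=(0, 1), keepdims=True) + 1e-8
    standardized = (image - mean) / std
    # (2) mask out cloudy multi-spectral pixels (MPs)
    cloud_mask = cloud_net(standardized)                 # True where cloudy
    # (3) score remaining MPs by change likelihood and keep only the changed ones
    change_score = change_net(standardized, reference)
    selected = (~cloud_mask) & (change_score > score_threshold)
    return standardized[selected]                        # MPs queued for compression/downlink

# Toy usage with random data and dummy stand-in models.
rng = np.random.default_rng(3)
img, ref = rng.normal(size=(64, 64, 4)), rng.normal(size=(64, 64, 4))
dummy_cloud = lambda x: rng.random(x.shape[:2]) < 0.3
dummy_change = lambda x, r: rng.random(x.shape[:2])
print("MPs selected for transmission:", len(pipeline(img, ref, dummy_cloud, dummy_change)))
```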
Abstract:The integration of Low Earth Orbit (LEO) satellite constellations into 5G and Beyond is essential to achieve efficient global connectivity. As LEO satellites are a global infrastructure with predictable dynamics, a pre-planned fair and load-balanced allocation of the radio resources to provide efficient downlink connectivity over large areas is an achievable goal. In this paper, we propose a distributed and a global optimal algorithm for satellite-to-cell resource allocation with multiple beams. These algorithms aim to achieve a fair allocation of time-frequency resources and beams to the cells based on the number of users in connected mode (i.e., registered). Our analyses focus on evaluating the trade-offs between average per-user throughput, fairness, number of cell handovers, and computational complexity in a downlink scenario with fixed cells, where the number of users is extracted from a population map. Our results show that both algorithms achieve a similar average per-user throughput. However, the global optimal algorithm achieves a fairness index over 0.9 in all cases, which is more than twice that of the distributed algorithm. Furthermore, by correctly setting the handover cost parameter, the number of handovers can be effectively reduced by more than 70% with respect to the case where the handover cost is not considered.
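As an illustration of the objective described above, the sketch below computes a fairness index (assumed here to be Jain's index; the paper may define fairness differently) for two simple allocation rules: a load-proportional allocation, akin in spirit to allocating by the number of connected-mode users, and an equal-per-cell allocation that ignores the load. All numbers are illustrative.

```python
import numpy as np

def jain_fairness(throughputs):
    """Jain's fairness index: 1 means perfectly equal per-user share."""
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def proportional_allocation(users_per_cell, total_resources):
    """Each cell gets resources in proportion to its number of connected-mode users."""
    users = np.asarray(users_per_cell, dtype=float)
    return total_resources * users / users.sum()

users = [120, 45, 300, 80]                              # connected-mode users per cell
alloc = proportional_allocation(users, total_resources=100.0)
equal = np.full(len(users), 100.0 / len(users))         # equal share per cell, ignoring load
per_user_prop = alloc / np.asarray(users)
per_user_equal = equal / np.asarray(users)
print("fairness, load-proportional:", round(jain_fairness(per_user_prop), 3))
print("fairness, equal-per-cell   :", round(jain_fairness(per_user_equal), 3))
```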
Abstract:The widespread adoption of Reconfigurable Intelligent Surfaces (RISs) in future practical wireless systems is critically dependent on the design and implementation of efficient access protocols, an issue that has received comparatively little attention in the research literature. In this paper, we propose a grant-free random access (RA) protocol for a RIS-assisted wireless communication setting, where a massive number of user equipments (UEs) try to access an access point (AP). The proposed protocol relies on a channel oracle, which enables the UEs to infer the best RIS configurations, namely those that provide them with opportunistic access. The inference is based on a model created during a training phase with a greatly reduced set of RIS configurations. Specifically, we consider a system whose operation is divided into three blocks: i) a downlink training block, which trains the model used by the oracle, ii) an uplink access block, where the oracle infers the best access slots, and iii) a downlink acknowledgment block, which provides feedback to the UEs that were successfully decoded by the AP during access. Numerical results show that the proper integration of the RIS into the protocol design is able to increase the expected end-to-end throughput by approximately 40% compared with the regular repetition slotted ALOHA protocol.
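A minimal sketch of the oracle idea follows: a UE measures the channel quality only on a greatly reduced subset of RIS configurations during the downlink training block, infers the quality of the remaining configurations (here with simple interpolation, a placeholder for the paper's model), and attempts access in the slots of its strongest inferred configurations. All values are illustrative.

```python
import numpy as np

def infer_best_slots(trained_configs, measured_quality, num_configs, num_slots):
    """Infer channel quality for all configurations from a reduced training set
    (placeholder interpolation) and return the indices of the best access slots."""
    all_configs = np.arange(num_configs)
    inferred = np.interp(all_configs, trained_configs, measured_quality)
    return np.argsort(inferred)[-num_slots:]

rng = np.random.default_rng(4)
num_configs = 64
true_quality = np.abs(np.sin(np.linspace(0, 3 * np.pi, num_configs))) + 0.1 * rng.random(num_configs)
trained = np.linspace(0, num_configs - 1, 8, dtype=int)   # greatly reduced training set
best = infer_best_slots(trained, true_quality[trained], num_configs, num_slots=4)
print("UE attempts access in slots:", np.sort(best))
```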
Abstract:Distributed machine learning (DML) results from the synergy between machine learning and connectivity. Federated learning (FL) is a prominent instance of DML in which intermittently connected mobile clients contribute to the training of a common learning model. This paper presents the new context brought to FL by satellite constellations where the connectivity patterns are significantly different from the ones assumed in terrestrial FL. We provide a taxonomy of different types of satellite connectivity relevant for FL and show how the distributed training process can overcome the slow convergence due to long offline times of clients by taking advantage of the predictable intermittency of the satellite communication links.
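The snippet below sketches one way to exploit the predictable intermittency mentioned above: since ground-contact windows are known in advance from orbital mechanics, the server can compute, for each round, the earliest time by which all scheduled clients can have delivered their updates, rather than waiting blindly for stragglers. The contact times are illustrative, not from the paper.

```python
# satellite -> list of (start, end) ground-contact windows in minutes (illustrative)
visibility = {
    "sat-1": [(0, 8), (95, 103)],
    "sat-2": [(40, 48), (135, 143)],
    "sat-3": [(60, 68), (155, 163)],
}

def next_contact(windows, t):
    """Earliest time >= t at which the satellite is reachable from the ground."""
    return min((max(start, t) for start, end in windows if end >= t), default=None)

def schedule_round(visibility, t):
    """Earliest time by which every scheduled client can have uploaded its update."""
    return max(next_contact(windows, t) for windows in visibility.values())

print("aggregation for the round starting at t=0 can close at t =", schedule_round(visibility, 0))
```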
Abstract:Non-geostationary orbit (NGSO) satellite constellations represent a cornerstone of the NewSpace paradigm and have thus become one of the hottest topics not only for industry and academia, but also for national space agencies and regulators. For instance, numerous companies worldwide, including Starlink, OneWeb, Kepler, SPUTNIX, and Amazon, have started or will soon start to deploy their own NGSO constellations, which aim to provide either broadband or IoT services. One of the major drivers for such high interest in NGSO constellations is that, with an appropriate design, they are capable of providing global coverage and connectivity.
Abstract:Reconfigurable intelligent surfaces (RISs) are arrays of passive elements that can control the reflection of incident electromagnetic waves. While RISs are particularly useful to avoid blockages, the protocol aspects of their implementation have been largely overlooked. In this paper, we devise a random access protocol for a RIS-assisted wireless communication setting. Rather than tailoring RIS reflections to the positions of the user equipments (UEs), our protocol relies on a finite set of RIS configurations designed to cover the area of interest. The protocol consists of a downlink training phase followed by an uplink access phase. During these phases, a base station (BS) controls the RIS to sweep over its configurations. The UEs then receive training signals to measure the channel quality under the different RIS configurations and refine their access policies. Numerical results show that our protocol increases the average number of successful access attempts, albeit at the expense of an increased access delay due to the training period. Promising results are further observed in scenarios with a high access load.
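The sketch below illustrates the sweep-based access policy described above: after measuring its channel quality per RIS configuration during training, each UE attempts access only in the slots of its strongest configurations, and a slot succeeds when exactly one UE attempts in it (a slotted-ALOHA collision model). The per-UE quantile threshold and Rayleigh channel values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
num_configs, num_ues = 16, 5
# Channel quality per (UE, RIS configuration), as measured during the training phase.
quality = rng.rayleigh(scale=1.0, size=(num_ues, num_configs))

# Access policy: each UE transmits only in the slots of its top-20% configurations.
threshold = np.quantile(quality, 0.8, axis=1, keepdims=True)
attempts = quality >= threshold

# A slot (configuration) yields a success only if exactly one UE attempts in it.
successes = (attempts.sum(axis=0) == 1).sum()
print("attempt pattern per UE:\n", attempts.astype(int))
print("successful slots in this frame:", successes)
```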