Abstract: During the last few years, the use of Unmanned Aerial Vehicles (UAVs) equipped with sensors and cameras has emerged as a cutting-edge technology to provide services such as surveillance, infrastructure inspection, and target acquisition. However, this approach requires UAVs to process data onboard, mainly for person/object detection and recognition, which may pose significant energy constraints since UAVs are battery-powered. A possible solution is the support of Non-Terrestrial Networks (NTNs) for edge computing. In particular, UAVs can partially offload data (e.g., video acquisitions from onboard sensors) to more powerful upstream High Altitude Platforms (HAPs) or satellites acting as edge computing servers, thus increasing battery autonomy compared to local processing, albeit at the expense of additional data transmission delays. Accordingly, in this study we model the energy consumption of UAVs, HAPs, and satellites, accounting for the energy required for data processing, offloading, and hovering. We then investigate whether data offloading can improve the system performance. Simulations demonstrate that edge computing can improve both UAV autonomy and end-to-end delay compared to onboard processing in many configurations.
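To make the trade-off concrete, the following is a minimal sketch of how the UAV-side energy comparison could be set up, assuming a simple linear power model for hovering, processing, and transmission. All parameter values, function names, and the negligible-download assumption are illustrative placeholders and are not taken from the study.

```python
# Illustrative sketch (not the paper's exact model): compares the UAV energy cost of
# processing a video frame onboard vs. offloading it to a HAP edge server.
# All numerical values below are hypothetical placeholders.

P_HOVER = 200.0      # W, hovering power (hypothetical)
P_CPU   = 15.0       # W, onboard processing power (hypothetical)
P_TX    = 5.0        # W, radio transmission power (hypothetical)
F_LOCAL = 2e9        # cycles/s, onboard CPU speed (hypothetical)
F_EDGE  = 20e9       # cycles/s, HAP edge CPU speed (hypothetical)
RATE    = 50e6       # bit/s, UAV-to-HAP uplink rate (hypothetical)

def local_energy(task_bits: float, cycles_per_bit: float) -> tuple[float, float]:
    """UAV energy (J) and delay (s) when the task is processed onboard."""
    t_proc = task_bits * cycles_per_bit / F_LOCAL
    return (P_CPU + P_HOVER) * t_proc, t_proc

def offload_energy(task_bits: float, cycles_per_bit: float) -> tuple[float, float]:
    """UAV energy (J) and delay (s) when the task is offloaded to the HAP."""
    t_tx = task_bits / RATE                        # uplink transmission time
    t_edge = task_bits * cycles_per_bit / F_EDGE   # remote processing time
    # Result download is assumed negligible; the UAV keeps hovering while it waits.
    energy = (P_TX + P_HOVER) * t_tx + P_HOVER * t_edge
    return energy, t_tx + t_edge

if __name__ == "__main__":
    bits, cpb = 8e6, 1000.0   # one 1-MB frame, 1000 CPU cycles per bit (hypothetical)
    print("local  :", local_energy(bits, cpb))
    print("offload:", offload_energy(bits, cpb))
```

With these placeholder numbers, offloading shifts the energy cost from onboard computation to transmission plus hovering during the remote service time, which is the balance the study's energy model quantifies.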
Abstract: Non-Terrestrial Networks (NTNs) are expected to be a key component of sixth-generation (6G) networks, supporting seamless broadband Internet connectivity and extending coverage even to rural and remote areas. In this context, High Altitude Platforms (HAPs) can act as edge servers to process computational tasks offloaded by energy-constrained terrestrial devices such as Internet of Things (IoT) sensors and ground vehicles (GVs). In this paper, we analyze the opportunity to support Vehicular Edge Computing (VEC) via HAP in a rural scenario where GVs can decide whether to process data onboard or offload them to a HAP. We characterize the system as a set of queues in which computational tasks arrive according to a Poisson process. We then assess the optimal VEC offloading factor that maximizes the probability of real-time service, given latency and computational capacity constraints.
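As an illustration of this formulation, the sketch below assumes that the GV and HAP queues can each be approximated as a stable M/M/1 system and sweeps the offloading factor to find the value maximizing the probability of meeting a real-time deadline. The M/M/1 approximation, the extra propagation/transmission delay term, and all numerical values are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumed M/M/1 approximation, not the paper's exact queueing model):
# sweep the offloading factor alpha and estimate the probability that a task meets a
# real-time deadline, either on the ground vehicle (GV) or on the HAP edge server.
# All numerical values are hypothetical placeholders.

import math

LAMBDA   = 40.0    # task/s, total Poisson arrival rate (hypothetical)
MU_GV    = 50.0    # task/s, GV service rate (hypothetical)
MU_HAP   = 200.0   # task/s, HAP service rate (hypothetical)
T_PROP   = 0.010   # s, extra GV-to-HAP transmission + propagation delay (hypothetical)
DEADLINE = 0.100   # s, real-time latency budget (hypothetical)

def p_on_time(arrival_rate: float, service_rate: float, deadline: float) -> float:
    """P(sojourn time <= deadline) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        return 0.0  # unstable queue: the deadline is missed in the long run
    return 1.0 - math.exp(-(service_rate - arrival_rate) * deadline)

# Offloading factor alpha: fraction of tasks sent to the HAP, the rest stay on the GV.
best = max(
    (alpha / 100 for alpha in range(101)),
    key=lambda a: (1 - a) * p_on_time((1 - a) * LAMBDA, MU_GV, DEADLINE)
                + a * p_on_time(a * LAMBDA, MU_HAP, DEADLINE - T_PROP),
)
print("offloading factor maximizing P(real-time service):", best)
```

The sweep simply trades queueing delay on the slower GV processor against the HAP's faster service rate minus the additional link delay, which mirrors the latency and computational capacity constraints stated above.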