Abstract:This paper focuses on enhancing the energy efficiency (EE) of a cooperative network featuring a 'miniature' unmanned aerial vehicle (UAV) that operates at terahertz (THz) frequencies, using holographic surfaces to improve the network's performance. Unlike traditional reconfigurable intelligent surfaces (RIS), which typically act as passive relays that adjust signal reflections, this work introduces a novel concept: energy harvesting (EH) using reconfigurable holographic surfaces (RHS) mounted on the miniature UAV. In this system, a source node enables the UAV to receive information and energy signals simultaneously, and the energy harvested via the RHS powers the UAV's data transmission to a designated destination. The EE optimization adjusts the non-orthogonal multiple access (NOMA) power coefficients and the UAV's flight path while accounting for the peculiarities of the THz channel. The optimization problem is solved in two steps: the trajectory is first refined using a successive convex approximation (SCA) method, and the NOMA power coefficients are then adjusted through a quadratic transform technique. Simulations demonstrate that the proposed algorithm outperforms baseline methods.
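The quadratic transform step mentioned above can be illustrated on a toy scalar fractional program (a minimal sketch with an assumed rate/power objective, not the paper's actual EE model or constraint set):

```python
import numpy as np

# Hedged sketch: the quadratic transform for a scalar fractional program
# max_x f(x)/g(x). The "rate" and "power" functions below are illustrative
# stand-ins for an EE objective, not the paper's THz/NOMA model.
def f(x):  # "rate" numerator (concave, increasing)
    return np.log2(1.0 + 5.0 * x)

def g(x):  # "power" denominator (affine)
    return 1.0 + x

xs = np.linspace(1e-6, 10.0, 10001)  # feasible grid
x = 5.0                              # arbitrary starting point
for _ in range(50):
    y = np.sqrt(f(x)) / g(x)                      # closed-form auxiliary update
    surrogate = 2*y*np.sqrt(f(xs)) - y**2 * g(xs)  # concave surrogate in x
    x = xs[np.argmax(surrogate)]                   # inner step by grid search

ee_qt = f(x) / g(x)
ee_direct = np.max(f(xs) / g(xs))  # brute-force reference on the same grid
print(ee_qt, ee_direct)
```

Each iteration alternates a closed-form update of the auxiliary variable with a maximization of the concave surrogate, and the achieved ratio is non-decreasing across iterations.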
Abstract:Achieving high-quality wireless interactive Extended Reality (XR) will require multi-gigabit throughput at extremely low latency. The millimeter-wave (mmWave) frequency bands, between 24 and 300 GHz, can achieve such extreme performance. However, maintaining a consistently high Quality of Experience for highly mobile users is challenging, as mmWave communications are inherently directional. In this work, we present and evaluate an end-to-end approach to such a mmWave-based mobile XR system. We perform a highly realistic simulation of the system, incorporating accurate XR data traffic, detailed mmWave propagation models, and actual user motion. We evaluate the impact of the beamforming strategy and frequency on the overall performance. In addition, we provide the first system-level evaluation of the CoVRage algorithm, a proactive and spatially aware user-side beamforming approach designed specifically for highly mobile XR environments.
Abstract:Low-cost, resource-constrained, maintenance-free, and energy-harvesting (EH) Internet of Things (IoT) devices, referred to as zero-energy devices (ZEDs), are rapidly attracting attention from industry and academia due to their myriad applications. To date, such devices remain largely unsupported by modern IoT connectivity solutions due to their intrinsic fabrication, hardware, deployment, and operation limitations, while clarity on their key technical enablers and prospects is still lacking. Herein, we address this by discussing the main characteristics and enabling technologies of ZEDs within the next generation of mobile networks, focusing specifically on unconventional EH sources, multi-source EH, power management, energy storage solutions, manufacturing materials and practices, backscattering, and low-complexity receivers. Moreover, we highlight the need for lightweight and energy-aware computing, communication, and scheduling protocols, and discuss potential approaches related to TinyML, duty cycling, and infrastructure enablers such as radio-frequency wireless power transfer and wake-up protocols. Challenging aspects and open research directions are identified and discussed in each case. Finally, we showcase an experimental ZED proof-of-concept based on ambient cellular backscattering.
Abstract:From the outset, batteries have been the main power source for the Internet of Things (IoT). However, replacing and disposing of billions of dead batteries per year is costly in terms of maintenance and ecologically irresponsible. Since batteries are one of the greatest threats to a sustainable IoT, battery-less devices are the solution to this problem. These devices run on long-lived capacitors charged using various forms of energy harvesting, which results in intermittent on-off device behaviour. In this work, we model this intermittent battery-less behaviour for LoRaWAN devices. The model allows us to characterize device performance and determine under which conditions a LoRaWAN device can work without batteries, as well as how its parameters should be configured. Results show that reliability directly depends on the device configuration (i.e., capacitor size and turn-on voltage threshold), application behaviour (i.e., transmission interval and packet size), and environmental conditions (i.e., energy harvesting rate).
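The capacitor-driven intermittency described above admits a simple first-order energy-budget sketch (all component values and power figures below are illustrative assumptions, not measurements or parameters from the paper):

```python
# Hedged sketch: a first-order energy budget for a battery-less
# (capacitor-powered) LoRaWAN node. Capacitance, voltages, packet energy,
# and harvesting/sleep powers are illustrative assumptions.
def usable_energy_j(cap_f, v_on, v_off):
    """Energy stored between the turn-on threshold and the brown-out voltage."""
    return 0.5 * cap_f * (v_on**2 - v_off**2)

def min_tx_interval_s(e_packet_j, p_harvest_w, p_sleep_w):
    """Shortest sustainable transmission interval: energy harvested over the
    interval must cover one packet plus sleep-mode consumption."""
    net = p_harvest_w - p_sleep_w
    if net <= 0:
        return float("inf")  # harvesting cannot even cover sleep
    return e_packet_j / net

cap = usable_energy_j(0.1, 3.3, 1.8)             # 100 mF supercap, 3.3 V -> 1.8 V
interval = min_tx_interval_s(0.12, 1e-3, 10e-6)  # ~120 mJ/packet, 1 mW harvested
print(cap, interval)
```

Even this toy budget reproduces the qualitative dependence reported in the abstract: reliability hinges jointly on capacitor sizing, voltage thresholds, packet cost, and the harvesting rate.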
Abstract:This paper proposes an energy-efficient resource allocation algorithm for an intelligent reflecting surface (IRS)-assisted downlink ultra-reliable low-latency communication (URLLC) network. The setup features a multi-antenna base station (BS) transmitting short-packet data traffic to a group of URLLC users. We maximize the network's total energy efficiency (EE) by optimizing the active beamformers at the BS and the passive beamformers (a.k.a. phase shifts) at the IRS. The main non-convex problem is divided into two sub-problems, which are solved via an alternating optimization (AO) approach. Using successive convex approximation (SCA) with a novel iterative rank-relaxation method, we construct a concave-convex objective function for each sub-problem. The first sub-problem is a fractional program, solved using the Dinkelbach method and a penalty-based approach. The second sub-problem is then solved using semi-definite programming (SDP) and the penalty-based approach. The iterative solution gradually approaches a rank-one solution for both the active beamforming and unit-modulus IRS phase-shift sub-problems. Our results demonstrate the efficacy of the proposed solution compared to existing benchmarks.
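As a rough illustration of the Dinkelbach step used for the first sub-problem, the following minimal sketch applies the method to a toy scalar fractional program (the objective is an assumed stand-in, not the paper's beamforming problem):

```python
import numpy as np

# Hedged sketch: Dinkelbach's method for a scalar fractional program
# max_x f(x)/g(x). The "sum rate" and "power" functions are toy stand-ins
# for the EE objective; the real problem optimizes beamforming matrices.
def f(x):
    return np.log2(1.0 + 10.0 * x)   # "sum rate" numerator

def g(x):
    return 0.5 + x                   # "power consumption" denominator

xs = np.linspace(1e-6, 8.0, 8001)    # feasible grid
lam = 0.0                            # current EE estimate
for _ in range(30):
    x = xs[np.argmax(f(xs) - lam * g(xs))]  # parametric subproblem
    lam = f(x) / g(x)                        # Dinkelbach ratio update

print(lam)
```

The parameter lam increases monotonically toward the optimal ratio; on a finite grid the iteration converges exactly after finitely many steps.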
Abstract:This paper considers the energy efficiency (EE) maximization of a simultaneous wireless information and power transfer (SWIPT)-assisted unmanned aerial vehicle (UAV) cooperative network operating at terahertz (THz) frequencies. The source performs SWIPT, enabling the UAV to receive both power and information, while also transmitting the information to a designated destination node. Subsequently, the UAV utilizes the harvested energy to relay the data to the intended destination node. Specifically, we maximize EE by optimizing the non-orthogonal multiple access (NOMA) power allocation coefficients, the SWIPT power splitting (PS) ratio, and the UAV trajectory. The main problem is decomposed into a two-stage optimization problem and solved using an alternating optimization approach. In the first stage, the PS ratio and trajectory are optimized via successive convex approximation, using a lower bound on the exponential factor in the THz channel model. In the second stage, the NOMA power coefficients are optimized using a quadratic transform approach. Numerical results demonstrate the effectiveness of the proposed resource allocation algorithm compared to baselines without trajectory optimization or without NOMA power and PS optimization.
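The SWIPT power-splitting trade-off at the heart of this design can be sketched numerically (received power, noise, EH efficiency, and the rate target below are all illustrative assumptions, not the paper's THz link budget):

```python
import numpy as np

# Hedged sketch: the SWIPT power-splitting (PS) trade-off at a relay. A
# fraction rho of the received power feeds energy harvesting; the remaining
# (1 - rho) fraction decodes information. All values are assumptions.
P_RX, ETA, NOISE = 1e-3, 0.6, 1e-9   # received power, EH efficiency, noise [W]

def harvested_w(rho):
    """Power harvested when a fraction rho of the received signal feeds EH."""
    return ETA * rho * P_RX

def rate_bps(rho):
    """Spectral efficiency of the remaining (1 - rho) fraction [b/s/Hz]."""
    return np.log2(1.0 + (1.0 - rho) * P_RX / NOISE)

# Pick the largest PS ratio that still meets an assumed first-hop rate target:
# harvesting grows linearly in rho while the rate shrinks only logarithmically.
target = 18.0
rhos = np.linspace(0.0, 1.0, 101)
rho_star = max(r for r in rhos if rate_bps(r) >= target)
print(rho_star, harvested_w(rho_star), rate_bps(rho_star))
```

Because the rate degrades only logarithmically in the split, a large harvesting fraction is often feasible, which is what makes a harvested-energy relay hop viable at all.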
Abstract:In recent years, channel state information (CSI) at sub-6 GHz has been widely exploited for Wi-Fi sensing, particularly for activity and gesture recognition. In this work, we instead explore mmWave (60 GHz) Wi-Fi signals for gesture recognition/pose estimation. Our focus is on mmWave Wi-Fi signals so that they can be used not only for high-data-rate communication but also for improved sensing, e.g., for extended reality (XR) applications. To this end, we extract spatial beam signal-to-noise ratios (SNRs) from the periodic beam training employed by IEEE 802.11ad devices. We consider a set of 10 gestures/poses motivated by XR applications. We conduct experiments in two environments and with three people. As a comparison, we also collect CSI from IEEE 802.11ac devices. To extract features from the CSI and the beam SNRs, we leverage a deep neural network (DNN). The DNN classifier achieves promising results on the beam SNR task, with a state-of-the-art accuracy of 96.7% in a single environment, even with a limited dataset. We also compare the robustness of beam SNRs and CSI across different environments. Our experiments reveal that features from the CSI generalize without additional re-training, while those from beam SNRs do not, so re-training is required in the latter case.
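The "beam SNRs in, gesture class out" pipeline can be illustrated with a deliberately minimal classifier (the paper uses a DNN; this sketch substitutes softmax regression on synthetic, linearly separable data, since no real beam-SNR traces are available here):

```python
import numpy as np

# Hedged sketch: classifying beam-SNR feature vectors into gesture classes.
# Synthetic data: each "gesture" perturbs a distinct mean beam-SNR profile.
# Dimensions and class count are illustrative assumptions.
rng = np.random.default_rng(0)
N_BEAMS, N_CLASSES, N = 32, 3, 300

means = rng.normal(0.0, 3.0, size=(N_CLASSES, N_BEAMS))  # per-class profiles
y = rng.integers(0, N_CLASSES, size=N)
X = means[y] + rng.normal(0.0, 0.5, size=(N, N_BEAMS))   # noisy observations

W = np.zeros((N_BEAMS, N_CLASSES))
for _ in range(200):                                  # plain gradient descent
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                 # softmax probabilities
    onehot = np.eye(N_CLASSES)[y]
    W -= 0.1 * X.T @ (p - onehot) / N                 # cross-entropy gradient

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(acc)
```

A real DNN replaces the single linear layer with several nonlinear ones, but the feature-vector-to-class framing is the same.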
Abstract:The advancement of Virtual Reality (VR) technology is focused on improving its immersiveness, supporting multiuser Virtual Experiences (VEs), and enabling users to move freely within their VEs while remaining confined within specialized VR setups through Redirected Walking (RDW). To meet their extreme data-rate and latency requirements, future VR systems will require supporting wireless networking infrastructures operating at millimeter-wave (mmWave) frequencies that leverage highly directional communication in both transmission and reception through beamforming and beamsteering. We propose the use of predictive context-awareness to optimize transmitter- and receiver-side beamforming and beamsteering. By predicting users' short-term lateral movements in multiuser VR setups with RDW, transmitter-side beamforming and beamsteering can be optimized through Line-of-Sight (LoS) "tracking" in the users' directions. At the same time, predictions of short-term orientational movements can be utilized for receiver-side beamforming to enhance coverage flexibility. We target two open problems in predicting these two types of context information: i) predicting lateral movements in multiuser VR settings with RDW, and ii) generating synthetic head-rotation datasets for training orientational movement predictors. Our experimental results demonstrate that Long Short-Term Memory (LSTM) networks offer promising accuracy in predicting lateral movements, and that context-awareness stemming from the VEs further enhances this accuracy. Additionally, we show that a TimeGAN-based approach to orientational data generation can create synthetic samples that closely match experimentally obtained ones.
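The lateral-movement prediction task can be framed as supervised learning over sliding windows, as sketched below (the trajectory and window sizes are illustrative assumptions; an LSTM predictor would consume these history/target pairs):

```python
import numpy as np

# Hedged sketch: turning a user trajectory into (history window -> future
# position) training pairs, the input/target format a sequence predictor
# such as an LSTM is trained on. Trajectory and window sizes are assumptions.
def make_windows(traj, hist=20, horizon=5):
    """traj: (T, 2) array of lateral (x, y) positions, uniformly sampled."""
    X, Y = [], []
    for t in range(len(traj) - hist - horizon + 1):
        X.append(traj[t : t + hist])            # past `hist` positions
        Y.append(traj[t + hist + horizon - 1])  # position `horizon` steps ahead
    return np.stack(X), np.stack(Y)

t = np.linspace(0, 4 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)  # synthetic curved walking path
X, Y = make_windows(traj)
print(X.shape, Y.shape)
```

Context-awareness, as proposed above, would enter by concatenating VE-derived features to each history window before it is fed to the predictor.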
Abstract:Contemporary Virtual Reality (VR) setups often include an external source delivering content to a Head-Mounted Display (HMD). "Cutting the wire" in such setups and going truly wireless will require a wireless network capable of delivering enormous amounts of video data at an extremely low latency. The massive bandwidth of higher frequencies, such as the millimeter-wave (mmWave) band, can meet these requirements. Due to high attenuation and path loss in the mmWave frequencies, beamforming is essential. In wireless VR, where the antenna is integrated into the HMD, any head rotation also changes the antenna's orientation. As such, beamforming must adapt, in real-time, to the user's head rotations. An HMD's built-in sensors providing accurate orientation estimates may facilitate such rapid beamforming. In this work, we present coVRage, a receive-side beamforming solution tailored for VR HMDs. Using built-in orientation prediction present on modern HMDs, the algorithm estimates how the Angle of Arrival (AoA) at the HMD will change in the near future, and covers this AoA trajectory with a dynamically shaped oblong beam, synthesized using sub-arrays. We show that this solution can cover these trajectories with consistently high gain, even in light of temporally or spatially inaccurate orientational data.
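The array-factor computation underlying such receive-side beam steering can be sketched as follows (a generic half-wavelength uniform linear array, not coVRage's dynamically shaped, sub-array-synthesized oblong beams):

```python
import numpy as np

# Hedged sketch: steering a half-wavelength uniform linear array (ULA) toward
# a target angle, the basic operation behind covering an Angle-of-Arrival
# trajectory with a beam. Element count and spacing are assumptions.
N = 16     # antenna elements
d = 0.5    # element spacing in wavelengths

def array_gain(theta_deg, steer_deg):
    """Normalized |array factor| when steered to steer_deg, seen at theta_deg."""
    n = np.arange(N)
    phase = 2 * np.pi * d * n * (np.sin(np.radians(theta_deg))
                                 - np.sin(np.radians(steer_deg)))
    return np.abs(np.exp(1j * phase).sum()) / N   # 1.0 at the steered direction

print(array_gain(30.0, 30.0))   # on the steered direction: maximal gain
print(array_gain(0.0, 30.0))    # well off the steered direction: near zero
```

Covering a predicted AoA trajectory amounts to shaping the aggregate pattern so that the gain stays high along the whole predicted angular path, rather than only at a single steering angle.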
Abstract:Full-immersive multiuser Virtual Reality (VR) envisions supporting unconstrained mobility of the users in the virtual worlds, while at the same time constraining their physical movements inside VR setups through redirected walking. To enable real-time delivery of high-data-rate video content, the supporting wireless networks will leverage highly directional communication links that "track" the users to maintain Line-of-Sight (LoS) connectivity. Recurrent Neural Networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have historically been suitable candidates for near-term trajectory prediction of natural human mobility, and have recently been shown to be applicable to predicting VR users' mobility under the constraints of redirected walking. In this work, we extend these initial findings by showing that Gated Recurrent Unit (GRU) networks, another candidate from the RNN family, generally outperform the traditionally utilized LSTMs. Second, we show that context from the virtual world can enhance prediction accuracy when used as an additional input feature, in comparison to the more traditional use of solely the users' historical physical movements. Finally, we show that a prediction system trained on a static number of coexisting VR users can be scaled to a multi-user system without significant accuracy degradation.
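A single GRU cell step, the recurrent unit compared against LSTMs above, can be written out directly (random weights; a minimal sketch of the gating equations, not a trained mobility predictor):

```python
import numpy as np

# Hedged sketch: one GRU cell step in NumPy. Weights are random and input/
# hidden sizes are illustrative assumptions; this only shows the gating
# mechanism that distinguishes GRUs from LSTMs (biases omitted for brevity).
rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, p):
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)              # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)              # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde                  # gated interpolation

D_IN, D_H = 2, 8   # e.g., (x, y) position in, hidden state of size 8
p = {k: rng.normal(0, 0.1, (D_H, D_IN if k[0] == "W" else D_H))
     for k in ("Wz", "Wr", "Wh", "Uz", "Ur", "Uh")}
h = np.zeros(D_H)
for x in rng.normal(0, 1, (10, D_IN)):   # run over a 10-step trajectory
    h = gru_step(x, h, p)
print(h.shape)
```

With two gates instead of the LSTM's three (and no separate cell state), the GRU has fewer parameters per hidden unit, one practical reason it can match or outperform LSTMs on short-horizon prediction tasks.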