KU Leuven
Abstract: Utilizing the mmWave band can potentially achieve the high data rates needed for realistic and seamless interaction in virtual reality (VR) applications. To this end, beamforming at both the access point (AP) and the head-mounted display (HMD) is necessary. The main challenge in this use case is the specific and highly dynamic user movement, which causes beam misalignment, degrading the received signal level and potentially leading to outages. This study examines mmWave-based coordinated multi-point networks for VR applications, where two or more APs cooperatively transmit signals to an HMD for connectivity diversity. Instead of quasi-omnidirectional reception, we propose dual-beam reception based on analog beamforming at the HMD, enhancing the receive beamforming gain towards the serving APs while achieving diversity. Evaluation using actual HMD movement data demonstrates the effectiveness of our approach, showing a reduction in outage rates of up to 13% compared to quasi-omnidirectional reception with two serving APs, and a 17% decrease compared to steerable single-beam reception with a single serving AP. Widening the separation angle between the two APs further reduces outage rates caused by head rotation, as rotations can still be tracked by the steerable multi-beam, albeit at the expense of a lower received signal level during non-outage periods.
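As an illustration of the dual-beam idea, the following minimal Python sketch (a toy model of our own: a 16-element half-wavelength ULA at the HMD, ideal phase shifters, and arbitrary AP angles; it is not the paper's simulator) synthesizes phase-only dual-beam weights by superposing two steering vectors and compares the resulting array gain towards each AP with a quasi-omnidirectional single-element reference.

import numpy as np

N = 16  # HMD antenna elements (assumed)

def steering(theta_deg):
    # half-wavelength ULA steering vector towards angle theta
    k = np.arange(N)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

def dual_beam_weights(theta1, theta2):
    # superpose two beams, then enforce the analog (phase-only) constraint
    w = steering(theta1) + steering(theta2)
    return np.exp(1j * np.angle(w)) / np.sqrt(N)

def gain_db(w, theta):
    return 20 * np.log10(np.abs(np.vdot(w, steering(theta))))

quasi_omni = np.eye(N)[0]              # single active element, ~0 dB reference
w = dual_beam_weights(-30.0, 40.0)     # two serving APs (assumed angles)
for ap in (-30.0, 40.0):
    print(f"AP at {ap:+.0f} deg: dual-beam {gain_db(w, ap):5.1f} dB, "
          f"quasi-omni {gain_db(quasi_omni, ap):5.1f} dB")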
Abstract: In this work, we propose a Medium Access Control (MAC) protocol for Light-based IoT (LIoT) networks, where the gateway node orchestrates and schedules the duty cycles of batteryless nodes based on their location and sleep time. The LIoT concept is a sustainable solution for massive indoor IoT applications, offering an alternative communication medium through Visible Light Communication (VLC). While most existing scheduling algorithms for intermittent batteryless IoT aim to maximize data collection and enhance dataset size, our solution is tailored to environmental sensing applications, such as temperature, humidity, and air quality monitoring, optimizing the measurement distribution and minimizing blind spots to achieve comprehensive and uniform environmental sensing. We propose a Balanced Space- and Time-based Time Division Multiple Access (BST-TDMA) scheduling algorithm, which addresses environmental sensing challenges by balancing spatial and temporal factors to improve the sensing efficiency of batteryless LIoT nodes. Our measurement-based results show that BST-TDMA efficiently schedules duty cycles at the given intervals.
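The space/time balancing can be sketched in a few lines of Python (an illustrative greedy heuristic under invented node positions and wake times, not the exact BST-TDMA algorithm of the paper): the gateway orders nodes into TDMA slots so that consecutive slots go to nodes that are ready to transmit and far apart in space, spreading measurements over the area.

import math

nodes = {  # node id -> ((x, y) position in m, earliest-ready time in s), assumed
    "n1": ((0.0, 0.0), 0.0), "n2": ((0.5, 0.2), 1.0),
    "n3": ((4.0, 3.0), 0.5), "n4": ((4.2, 0.1), 2.0),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def schedule(nodes, slot_len=2.0):
    remaining = dict(nodes)
    order, t, last_pos = [], 0.0, None
    while remaining:
        # prefer nodes that are ready, then those far from the last-scheduled one
        def score(item):
            pos, ready = item[1]
            spatial = dist(pos, last_pos) if last_pos else 0.0
            return (ready <= t, spatial)
        nid, (pos, _) = max(remaining.items(), key=score)
        order.append((round(t, 1), nid))
        last_pos, t = pos, t + slot_len
        del remaining[nid]
    return order

print(schedule(nodes))  # e.g. [(0.0, 'n1'), (2.0, 'n3'), (4.0, 'n2'), (6.0, 'n4')]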
Abstract: This work describes the architecture and vision behind the design and implementation of a new test infrastructure for 6G physical layer research at KU Leuven. The testbed is designed for physical layer research and experimentation following several emerging trends, such as cell-free networking, integrated communication and sensing, open disaggregated Radio Access Networks, AI-native design, and multiband operation. The software is almost entirely free and open-source, making it easy to contribute to and reuse any component. The open testbed is designed to provide real-time, labeled data on all parts of the physical layer, from raw IQ samples to synchronization statistics, channel state information, and symbol/bit/packet error rates. Real-time labeled datasets can be collected by synchronizing the physical-layer data logging with a positioning and motion capture system. One of the main design goals is to make the testbed open and remotely accessible to external users. Most tests and data captures can easily be automated, and experiment code can be deployed remotely using standard containers (e.g., Docker or Podman). Finally, the paper describes how the testbed can be used for our research on joint communication and sensing, over-the-air synchronization, distributed processing, and AI in the loop.
Abstract: Cell-free massive multiple-input multiple-output (CFmMIMO) is a paradigm that can improve users' spectral efficiency (SE) far beyond what traditional cellular networks offer. The increased spatial diversity in CFmMIMO is achieved by spreading the antennas over many small access points (APs), which cooperate to serve the users. Sequential fronthaul topologies in CFmMIMO, such as the daisy chain and the multi-branch tree topology, have recently gained considerable attention. In such a processing architecture, each AP must store its received signal vector in memory until it receives the relevant information from the previous AP in the sequence to refine the estimate of the users' signal vector in the uplink. In this paper, we adopt vector-wise and element-wise compression of the raw or pre-processed received signal vectors to store them in memory. We investigate the impact of the limited memory capacity of the APs on the optimal number of APs. We show that with no memory constraint, having single-antenna APs is optimal, especially as the number of users grows. However, limited memory at the APs restricts the depth of the sequential processing pipeline. Furthermore, we investigate the relationship between the memory capacity of the APs and the rate of the fronthaul link.
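To make the memory accounting concrete, the short Python sketch below (a toy of our own: an element-wise mid-rise uniform quantizer and arbitrary sizes; the paper's compression schemes are more elaborate) buffers a length-K received vector at B bits per real dimension, which is the snapshot an AP must hold while waiting for the message from the previous AP in the sequence.

import numpy as np

def quantize(x, bits, xmax=3.0):
    # element-wise mid-rise uniform quantizer on real and imaginary parts
    levels = 2 ** bits
    step = 2 * xmax / levels
    q = lambda v: (np.clip(np.floor(v / step), -levels // 2, levels // 2 - 1) + 0.5) * step
    return q(x.real) + 1j * q(x.imag)

K, bits = 8, 4                           # users and bits per real dimension (assumed)
rng = np.random.default_rng(0)
y = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
yq = quantize(y, bits)
mem_bits = 2 * K * bits                  # memory held while the AP waits
nmse = np.sum(np.abs(y - yq) ** 2) / np.sum(np.abs(y) ** 2)
print(f"memory per buffered vector: {mem_bits} bits, NMSE: {nmse:.3e}")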
Abstract: The sustainable design of Internet of Things (IoT) networks encompasses energy efficiency and autonomy, as well as reliable communication, ensuring no energy is wasted on undelivered data. Under these considerations, this work proposes the design and implementation of energy-efficient Bluetooth Low Energy (BLE) and Light-based IoT (LIoT) batteryless sensor nodes powered by an indoor-light Energy Harvesting Unit (EHU). Our design integrates these nodes into a sensing network to improve its reliability by combining both technologies and taking advantage of their features. The nodes incorporate state-of-the-art components, such as low-power sensors and efficient Systems-on-Chip (SoCs). Moreover, we design a strategy for adaptively switching between active and sleep cycles as a function of the available energy, allowing the IoT nodes to operate continuously without batteries. Our results show that by adapting the duty cycle of the BLE and LIoT nodes to the environment's light intensity, we can ensure continuous and reliable node operation. In particular, measurements show that our proposed BLE and LIoT node designs can communicate bidirectionally with an IoT gateway every 19.3 and 624.6 seconds, respectively, in an energy-autonomous and reliable manner.
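The energy-neutral duty-cycling idea can be summarized in a few lines of Python (all constants below are invented placeholders, not the measured values from our prototypes): the node picks the shortest wake-up period for which the harvested energy per period covers one active cycle plus the sleep-mode drain.

def sleep_interval_s(p_harvest_uw,
                     e_active_uj=250.0,   # energy per sense+transmit cycle (assumed)
                     p_sleep_uw=1.5):     # sleep-mode draw (assumed)
    """Energy-neutral period: solving p_harvest * T >= e_active + p_sleep * T
    for the period T gives T >= e_active / (p_harvest - p_sleep)."""
    margin = p_harvest_uw - p_sleep_uw
    if margin <= 0:
        return float("inf")               # too dark: stay asleep
    return e_active_uj / margin

for p in (5.0, 15.0, 120.0):              # harvested power in uW (assumed)
    print(f"{p:6.1f} uW harvested -> wake every {sleep_interval_s(p):7.1f} s")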
Abstract: In Frequency Modulated Continuous Waveform (FMCW) radar systems, phase noise from the Phase-Locked Loop (PLL) can raise the noise floor in the range-Doppler map. The adverse effects of phase noise on close targets can be mitigated if the transmitter (Tx) and receiver (Rx) employ the same chirp, a phenomenon known as the range correlation effect. In a multi-static radar network, however, sharing the chirp between distant radars becomes challenging: each radar generates its own chirp, leading to uncorrelated phase noise, so the system cannot benefit from the range correlation effect. Previous studies have shown that selecting a suitable code sequence for a Phase Modulated Continuous Waveform (PMCW) radar can reduce the impact of uncorrelated phase noise in the range dimension. In this paper, we demonstrate how to leverage this property to exploit both the mono- and multi-static signals of each radar in the network without having to share any signal at the carrier frequency. The paper introduces a detailed signal model for PMCW radar networks, analyzing both correlated and uncorrelated phase noise effects in the Doppler dimension. Additionally, a solution for compensating uncorrelated phase noise in Doppler is presented and supported by numerical results.
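As a toy illustration of why the code sequence matters, the Python sketch below (a random binary code, a single point target, and a random-walk phase-noise model of our own choosing; the paper develops a detailed signal model and specific code designs) correlates a PMCW return against the transmit code with and without uncorrelated oscillator phase noise and reports the range-profile peak and sidelobe floor.

import numpy as np

rng = np.random.default_rng(1)
L = 1024
code = rng.choice([-1.0, 1.0], size=L)    # binary spreading code (toy choice)
delay, amp = 37, 0.8                      # single point target (assumed)
rx_clean = amp * np.roll(code, delay)
phase_noise = np.exp(1j * np.cumsum(0.02 * rng.standard_normal(L)))
rx_noisy = rx_clean * phase_noise         # uncorrelated-oscillator case

def range_profile(rx):
    # circular cross-correlation with the code via the FFT
    return np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))) / L

for name, rx in (("shared chirp/LO", rx_clean), ("uncorrelated PN", rx_noisy)):
    p = range_profile(rx)
    print(f"{name:>16}: peak at bin {int(np.argmax(p))}, "
          f"peak-to-median-sidelobe ratio {p.max() / np.median(p):7.1f}")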
Abstract: Deep learning (DL) techniques are increasingly pervasive across various domains, including wireless communication, where they extract insights from raw radio signals. However, the computational demands of DL pose significant challenges, particularly in distributed wireless systems such as cell-free networks, where deploying DL models on edge devices becomes hard due to the heightened computational load. This load escalates with larger input sizes, which often correlate with improved model performance. To mitigate this challenge, Early Exiting (EE) techniques have been introduced in DL, primarily targeting the depth of the model: a model can exit during inference based on specified criteria, typically entropy measures at intermediate exits, so that less complex samples exit early, reducing computational load and inference time. In this contribution, we propose a novel width-wise exiting strategy for Convolutional Neural Network (CNN)-based architectures. By selectively adjusting the input size, we regulate computational demands effectively, decreasing the average computational load during inference while maintaining performance comparable to conventional models. We specifically investigate modulation classification, a well-established application of DL in wireless communication. Our experimental results show substantial reductions in computational load, with an average decrease of 28% and particularly notable reductions of 65% in high-SNR scenarios. Through this work, we present a practical solution for reducing computational demands in deep learning applications, particularly within the domain of wireless communication.
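The mechanism can be sketched in PyTorch as follows (an illustrative architecture with invented layer sizes and threshold, not the paper's exact network): a cheap head first classifies a width-reduced (downsampled) IQ input, and only samples whose softmax entropy exceeds a threshold are passed to the full-resolution branch.

import torch
import torch.nn.functional as F

class WidthWiseEE(torch.nn.Module):
    def __init__(self, n_classes=11, tau=1.0):
        super().__init__()
        self.small = torch.nn.Sequential(  # operates on downsampled 2x256 IQ
            torch.nn.Conv1d(2, 16, 7, padding=3), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
            torch.nn.Linear(16, n_classes))
        self.large = torch.nn.Sequential(  # operates on the full 2x1024 IQ
            torch.nn.Conv1d(2, 64, 7, padding=3), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
            torch.nn.Linear(64, n_classes))
        self.tau = tau                     # entropy threshold (assumed)

    def forward(self, iq):                 # iq: (batch, 2, 1024)
        logits = self.small(iq[:, :, ::4])   # width-reduced input
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
        exit_early = entropy < self.tau      # confident -> skip the large branch
        out = logits.clone()
        if (~exit_early).any():
            out[~exit_early] = self.large(iq[~exit_early])
        return out, exit_early

model = WidthWiseEE()
x = torch.randn(8, 2, 1024)
_, early = model(x)
print(f"{early.float().mean().item():.0%} of samples exited early")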
Abstract: Fronthaul quantization causes significant distortion in cell-free massive MIMO networks. Due to the limited capacity of fronthaul links, the information exchanged among access points (APs) must be heavily quantized. Furthermore, the complexity of multiplication in the baseband processing unit increases with the number of bits of the operands, so quantizing the APs' signal vectors also reduces the complexity of signal estimation. Most recent works consider direct quantization of the received signal vectors at each AP without any pre-processing. However, the signal vectors received at different APs are mutually correlated (inter-AP correlation) and also have correlated dimensions (intra-AP correlation). Hence, cooperative quantization of the APs' fronthaul signals can help use the quantization bits at each AP efficiently and further reduce the distortion imposed on the quantized vectors. This paper considers a daisy-chain fronthaul and three different processing sequences at each AP. We show that 1) de-correlating the received signal vector at each AP from the corresponding vectors of the previous APs (inter-AP de-correlation) and 2) de-correlating the dimensions of the received signal vector at each AP (intra-AP de-correlation) before quantization use the quantization bits at each AP more efficiently than directly quantizing the received signal vector without any pre-processing and, consequently, improve the bit error rate (BER) and normalized mean square error (NMSE) of the users' signal estimation.
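The benefit of de-correlating before quantizing can be seen in a small NumPy experiment (a Gaussian toy model with invented correlations and a simple per-dimension uniform quantizer; not the paper's setup): the current AP subtracts a least-squares prediction of its vector from the previous AP's vector (inter-AP de-correlation), rotates the residual into its eigenbasis (intra-AP de-correlation), and quantizes only the innovation.

import numpy as np

rng = np.random.default_rng(2)
K, n = 4, 10000                            # users, sample vectors (toy sizes)
S = rng.standard_normal((n, K))            # shared user signals
Y_prev = S + 0.3 * rng.standard_normal((n, K))                           # previous AP
Y_cur = S * np.array([1.0, 0.8, 1.2, 0.9]) + 0.3 * rng.standard_normal((n, K))  # current AP

def uq(x, bits=3):
    # per-dimension mid-rise uniform quantizer (overload clipping omitted)
    step = 6.0 * x.std(axis=0) / 2 ** bits
    return (np.floor(x / step) + 0.5) * step

# inter-AP de-correlation: keep only the innovation w.r.t. the previous AP
W = np.linalg.solve(Y_prev.T @ Y_prev, Y_prev.T @ Y_cur)
R = Y_cur - Y_prev @ W
# intra-AP de-correlation: rotate the innovation into its eigenbasis
_, V = np.linalg.eigh(np.cov(R.T))
Rw = R @ V

for name, x, undo in (("direct", Y_cur, lambda q: q),
                      ("de-correlated", Rw, lambda q: q @ V.T + Y_prev @ W)):
    nmse = np.mean((undo(uq(x)) - Y_cur) ** 2) / np.mean(Y_cur ** 2)
    print(f"{name:>14}: NMSE {nmse:.4f}")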
Abstract: Cell-Free Massive MIMO (CF mMIMO) has emerged as a potential enabler for future networks. These networks have been shown to be much more energy-efficient than classical cellular systems when serving users at peak capacity. However, CF mMIMO networks are dimensioned for peak traffic loads, and when traffic is lower, they are significantly over-dimensioned and far from energy efficient. To this end, Adaptive Access Point (AP) ON/OFF Switching (ASO) strategies have been developed to save energy at lower traffic loads by putting unnecessary APs to sleep. Unfortunately, existing strategies rely on measuring channel state information between every user and every AP, resulting in significant measurement energy overheads. Furthermore, the current state-of-the-art approach has a computational complexity that scales exponentially with the number of APs. In this work, we present a novel convex feasibility-testing method that checks per-user Quality-of-Service (QoS) requirements without considering all possible AP activations. We then propose an iterative algorithm that activates APs until all users' requirements are fulfilled. We show that our method performs comparably to the optimal solution while avoiding costly mixed-integer problems and requiring channel state information from only a limited subset of APs.
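The iterative activation loop can be illustrated with a simplified Python sketch (a toy sum-of-gains requirement stands in for the paper's convex feasibility test; gains and the QoS target are invented): APs are woken one at a time for the worst-served user, so channel state is only ever needed on the activated subset.

import numpy as np

rng = np.random.default_rng(3)
n_aps, n_users = 12, 5
gain = rng.exponential(1.0, size=(n_aps, n_users))   # large-scale gains (toy)
qos = 1.5                                            # per-user requirement (toy)

active = []
while True:
    served = gain[active].sum(axis=0) if active else np.zeros(n_users)
    worst = int(np.argmin(served))
    if served[worst] >= qos:
        break                                        # every user satisfied
    candidates = [a for a in range(n_aps) if a not in active]
    if not candidates:
        break                                        # infeasible with all APs on
    # wake the AP that most helps the worst-served user
    active.append(max(candidates, key=lambda a: gain[a, worst]))

print(f"{len(active)}/{n_aps} APs active: {sorted(active)}")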
Abstract: Galileo is the first global navigation satellite system to authenticate its civilian signals, through the Open Service Navigation Message Authentication (OSNMA) protocol. However, OSNMA delays the time to obtain a first position and time fix, the so-called Time To First Authenticated Fix (TTFAF). Reducing the TTFAF as much as possible is crucial for integrating the technology seamlessly into current products. In the case where the receiver already has cryptographic data available, the so-called hot-start mode and the focus of this article, currently available implementations achieve an average TTFAF of around 100 seconds in ideal environments. In this work, we dissect the TTFAF process, propose two main optimizations to reduce it, and benchmark them in three distinct scenarios (open-sky, soft urban, and hard urban) with recorded real data. Moreover, we evaluate the optimizations using the synthetic scenario from the official OSNMA test vectors. The first block of optimizations centers on extracting as much information as possible from broken sub-frames by processing them at page level and combining redundant data from multiple satellites. The second block of optimizations aims to reconstruct missed navigation data by using fields in the authentication tags belonging to the same sub-frame as the authentication key. Combining both optimizations improves the TTFAF substantially in all considered scenarios. We obtain an average TTFAF of 60.9 and 68.8 seconds for the test vectors and the open-sky scenario, respectively, with a best case of 44.0 seconds in both. Likewise, the urban scenarios see a drastic reduction of the average TTFAF between the non-optimized and optimized cases, from 127.5 to 87.5 seconds in the soft urban scenario and from 266.1 to 146.1 seconds in the hard urban scenario. These optimizations are available as part of the open-source OSNMAlib library on GitHub.
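The cross-satellite redundancy exploited by the first block of optimizations can be illustrated with a hypothetical Python sketch (the page representation and the majority-vote rule are our own simplifications; OSNMAlib's actual page-level processing differs): since several satellites broadcast the same navigation word, partially received copies from broken sub-frames can be combined bit by bit to recover a word no single satellite delivered intact (None marks bits lost to fading).

def combine_pages(pages):
    """Majority-vote each bit position across partially received copies."""
    out = []
    for bits in zip(*pages):
        votes = [b for b in bits if b is not None]
        if not votes:
            return None                    # position unrecoverable
        out.append(1 if sum(votes) * 2 >= len(votes) else 0)
    return out

sat_a = [1, 0, None, 1, None, 0]
sat_b = [1, None, 1, 1, 0, None]
sat_c = [None, 0, 1, None, 0, 0]
print(combine_pages([sat_a, sat_b, sat_c]))   # -> [1, 0, 1, 1, 0, 0]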