Abstract:Novel reconfigurable intelligent surface (RIS) architectures, known as beyond diagonal RISs (BD-RISs), have been proposed to enhance reflection efficiency and expand RIS capabilities. However, their passive nature, non-diagonal reflection matrix, and large number of coupled reflecting elements complicate the channel state information (CSI) estimation process. The challenge further escalates in scenarios with fast-varying channels. In this paper, we address this challenge by proposing novel joint channel estimation and prediction strategies with low overhead and high accuracy for two different RIS architectures in a BD-RIS-assisted multiple-input multiple-output system under correlated fast-fading environments with channel aging. The channel estimation procedure employs the Tucker2 decomposition with bilinear alternating least squares, which decomposes the cascaded channels of the BD-RIS-assisted system into effective channels of reduced dimension. The channel prediction framework is based on a convolutional neural network combined with an autoregressive predictor. The estimated/predicted CSI is then used to optimize the RIS phase shifts so as to maximize the downlink sum rate. Simulation results demonstrate that our proposed approach is robust to channel aging and exhibits high estimation accuracy. Moreover, our scheme delivers a high average downlink sum rate, outperforming other state-of-the-art channel estimation methods. The results also reveal a remarkable reduction in pilot overhead of up to 98\% compared to baseline schemes, all while imposing low computational complexity.
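As a rough illustration of the bilinear alternating least squares step, the numpy sketch below alternates two linear least-squares solves to recover the factors of a cascaded channel $Y_t = H_{\mathrm{rx}}\Phi_t H_{\mathrm{tx}}$ from known BD-RIS reflection matrices $\Phi_t$. All dimensions, the unitary reflection patterns, the noiseless observations, and the random initialization are illustrative assumptions; the paper's complete Tucker2-based procedure is not reproduced here.

```python
# Minimal sketch of bilinear alternating least squares (BALS) for a BD-RIS
# cascaded channel Y_t = H_rx @ Phi_t @ H_tx, with known (non-diagonal)
# reflection matrices Phi_t. Everything below is an illustrative stand-in,
# not the paper's exact Tucker2-based design.
import numpy as np

rng = np.random.default_rng(0)
M, N, K, T = 4, 8, 2, 20   # rx antennas, RIS elements, tx antennas, training blocks

H_rx_true = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
H_tx_true = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
Phis = [np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))[0]
        for _ in range(T)]  # unitary (lossless) non-diagonal BD-RIS reflection matrices
Ys = [H_rx_true @ P @ H_tx_true for P in Phis]  # noiseless observations for clarity

# BALS: alternate two linear least-squares problems until convergence.
H_tx = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
for _ in range(50):
    # Fix H_tx, solve for H_rx:  [Y_1 ... Y_T] = H_rx [Phi_1 H_tx ... Phi_T H_tx]
    Z = np.hstack([P @ H_tx for P in Phis])
    H_rx = np.hstack(Ys) @ np.linalg.pinv(Z)
    # Fix H_rx, solve for H_tx:  [Y_1; ...; Y_T] = [H_rx Phi_1; ...; H_rx Phi_T] H_tx
    W = np.vstack([H_rx @ P for P in Phis])
    H_tx = np.linalg.pinv(W) @ np.vstack(Ys)

# The factors are identifiable only up to a scalar ambiguity, so we check the cascade.
err = np.linalg.norm(H_rx @ Phis[0] @ H_tx - Ys[0]) / np.linalg.norm(Ys[0])
print(f"relative cascade reconstruction error: {err:.2e}")
```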
Abstract:We evaluate the performance of the LoRaWAN Long-Range Frequency Hopping Spread Spectrum (LR-FHSS) technique using a device-level probabilistic strategy for code rate and header replica allocation. Specifically, we investigate the effects of different header replica and code rate allocations at each end-device, guided by a probability distribution provided by the network server. As a benchmark, we compare the proposed strategy with the standardized LR-FHSS data rates DR8 and DR9. Our numerical results demonstrate that the proposed strategy consistently outperforms the DR8 and DR9 standard data rates across all considered scenarios. Notably, our findings reveal that the optimal distribution rarely includes data rate DR9, while data rate DR8 contributes significantly to both the goodput and the energy efficiency optimizations.
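A minimal sketch of the device-level sampling step, assuming a made-up two-entry configuration codebook loosely shaped after DR8/DR9 (three header replicas at code rate 1/3 versus two replicas at 2/3) and a hypothetical server-provided distribution; the actual optimized distributions are the subject of the paper.

```python
# Illustrative sketch: each end-device samples a (header-replica count,
# code rate) configuration from a distribution broadcast by the network
# server. The configurations and the distribution are placeholders.
import numpy as np

rng = np.random.default_rng(1)
configs = [(3, 1 / 3), (2, 2 / 3)]   # hypothetical DR8-like and DR9-like entries
p = np.array([0.8, 0.2])             # hypothetical server-provided distribution

n_devices = 1000
choices = rng.choice(len(configs), size=n_devices, p=p)
for idx, cfg in enumerate(configs):
    share = np.mean(choices == idx)
    print(f"config (replicas, CR) = {cfg}: assigned to {share:.1%} of devices")
```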
Abstract:Wireless communication systems must increasingly support a multitude of machine-type communications (MTC) devices, thus calling for advanced strategies for active user detection (AUD). Recent literature has delved into AUD techniques based on compressed sensing, highlighting the critical role of signal sparsity. This study investigates the relationship between frequency diversity and signal sparsity in the AUD problem. Single-antenna users transmit multiple copies of non-orthogonal pilots across multiple frequency channels, and the base station independently performs AUD in each channel using the orthogonal matching pursuit algorithm. We note that, although frequency diversity may improve the likelihood of successful reception of the signals, it may also degrade the sparsity level of the channel, leading to important trade-offs. We show that a sparser signal significantly benefits AUD, surpassing the advantages brought by frequency diversity in scenarios with limited temporal resources and/or large numbers of receive antennas. Conversely, with longer pilots and fewer receive antennas, investing in frequency diversity becomes more impactful, yielding a tenfold AUD performance improvement.
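For concreteness, a minimal numpy sketch of orthogonal matching pursuit applied to single-channel AUD, where the columns of the pilot matrix are the users' non-orthogonal pilots and the recovered support is the set of users declared active. Pilot length, user counts, unit channel gains, and the known sparsity level are illustrative simplifications.

```python
# Minimal orthogonal matching pursuit (OMP) sketch for active user detection.
import numpy as np

def omp(A, y, n_active):
    """Greedy support recovery of y ~ A @ x with |supp(x)| = n_active."""
    residual, support = y.copy(), []
    for _ in range(n_active):
        # Pick the pilot most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Re-fit on the chosen support and update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return sorted(support)

rng = np.random.default_rng(2)
L, N, K = 32, 100, 5   # pilot length, total users, active users
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
active = rng.choice(N, size=K, replace=False)
x = np.zeros(N, dtype=complex)
x[active] = 1.0        # unit channel gains for simplicity
y = A @ x + 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

print("true active:", sorted(active.tolist()))
print("detected:   ", omp(A, y, K))
```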
Abstract:In this letter, we study an attack that leverages a reconfigurable intelligent surface (RIS) to induce harmful interference toward multiple users in massive multiple-input multiple-output (mMIMO) systems during the data transmission phase. We propose an efficient and flexible weighted-sum projected gradient-based algorithm for the attacker to optimize the RIS reflection coefficients without knowing the legitimate users' channels. To counter such a threat, we propose two reception strategies. Simulation results demonstrate that our malicious algorithm outperforms baseline strategies while offering adaptability for targeting specific users. At the same time, our results show that our mitigation strategies are effective even when only an imperfect estimate of the cascaded RIS channel is available.
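The following numpy sketch shows generic projected-gradient ascent over unit-modulus RIS coefficients for a weighted-sum quadratic objective $f(\theta)=\sum_i w_i |a_i^H \theta|^2$. The vectors $a_i$ stand in for whatever channel information the attacker can acquire; the paper's actual objective, which avoids legitimate-user CSI, is not reproduced here.

```python
# Generic projected-gradient sketch for unit-modulus RIS coefficients theta,
# maximizing f(theta) = sum_i w_i |a_i^H theta|^2. Sizes and the a_i vectors
# are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)
N, U = 64, 4                        # RIS elements, targeted users
A = rng.standard_normal((U, N)) + 1j * rng.standard_normal((U, N))
w = np.ones(U) / U                  # weights to steer interference among users

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # unit-modulus initialization
step = 1e-3
for _ in range(500):
    grad = 2 * (A.conj().T * w) @ (A @ theta)        # Wirtinger gradient of f
    theta = theta + step * grad                      # gradient ascent step
    theta = theta / np.abs(theta)                    # project onto unit modulus

print("objective value:", float(np.sum(w * np.abs(A @ theta) ** 2)))
```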
Abstract:Managing inter-cell interference is among the major challenges in a wireless network, all the more so when strict quality of service must be guaranteed, as in ultra-reliable low-latency communications (URLLC) applications. This study introduces a novel intelligent interference management framework for a local 6G network that allocates resources based on interference prediction. The proposed algorithm applies an advanced signal pre-processing technique known as empirical mode decomposition, followed by prediction of each decomposed component using a sequence-to-one transformer. The predicted interference power is then used to estimate the future signal-to-interference-plus-noise ratio and, subsequently, to allocate resources so as to guarantee the high reliability required by URLLC applications. Finally, an interference cancellation scheme based on the interference signal predicted by the transformer model is explored. The proposed sequence-to-one transformer model proves robust for interference prediction. The proposed scheme is numerically evaluated against two baseline algorithms, and the root mean squared error is found to be reduced by up to 55\% relative to a baseline scheme.
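A sketch of the prediction pipeline under simplifying assumptions: past interference is decomposed with EMD (via the PyEMD package), each component is forecast one step ahead, and the forecasts are recombined and mapped to a predicted SINR. A linear autoregression stands in for the sequence-to-one transformer, and the toy trace and link parameters are invented.

```python
# Pipeline sketch: EMD-decompose past interference, predict each component,
# reconstruct the interference forecast, and map it to a predicted SINR.
# Requires the EMD-signal package (PyEMD).
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(4)
t = np.arange(400)
interference = np.abs(np.sin(0.05 * t) + 0.3 * rng.standard_normal(400))  # toy trace

imfs = EMD().emd(interference)   # rows: extracted IMFs (last row acts as the residue)

def ar_one_step(x, order=8):
    """Least-squares AR(order) fit, returning a one-step-ahead prediction."""
    X = np.column_stack([x[i:len(x) - order + i + 1] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X[:-1], x[order:], rcond=None)
    return float(X[-1] @ coef)

i_pred = sum(ar_one_step(c) for c in imfs)   # recombine per-component forecasts
signal_power, noise_power = 1.0, 0.05        # assumed link parameters
sinr_pred = signal_power / (max(i_pred, 0.0) + noise_power)
print(f"predicted interference: {i_pred:.3f}, predicted SINR: {sinr_pred:.2f}")
```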
Abstract:Effective resource allocation is a crucial requirement for achieving the stringent performance targets of ultra-reliable low-latency communication (URLLC) services. Predicting future interference and utilizing it to design efficient interference management algorithms is one way to allocate resources for URLLC services effectively. This paper proposes an empirical mode decomposition (EMD) based hybrid prediction method to predict the interference and allocate downlink resources based on the prediction results. EMD is used to decompose the past interference values experienced by the user equipment. Long short-term memory and auto-regressive integrated moving average methods are used to predict the decomposed components. The final predicted interference value is reconstructed from the individual predictions of the decomposed components. Such a decomposition-based prediction method is found to reduce the root mean squared error of the prediction by $20-25\%$. The proposed resource allocation algorithm utilizing the EMD-based interference prediction is found to achieve near-optimal resource allocation, correspondingly yielding $2-3$ orders of magnitude lower outage than resource allocation based on a state-of-the-art baseline prediction algorithm.
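As a sketch of the hybrid step, the snippet below shows the ARIMA half applied to a slowly-varying component such as the EMD residue (the oscillatory IMFs would go to the LSTM); it uses statsmodels, and the toy component and $(p,d,q)$ order are illustrative.

```python
# Sketch of the ARIMA half of the hybrid EMD-based predictor.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
residue = np.cumsum(0.01 + 0.02 * rng.standard_normal(300))  # toy trend component

model = ARIMA(residue, order=(2, 1, 1))   # (p, d, q) chosen for illustration
fit = model.fit()
print("one-step forecast of the trend component:", float(fit.forecast(steps=1)[0]))

# The final interference forecast is the sum of all per-component forecasts:
# i_hat[t+1] = sum_k imf_hat_k[t+1] + residue_hat[t+1]
```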
Abstract:In many emerging Internet of Things (IoT) applications, the freshness of the collected information is an important design criterion. Age of Information (AoI) quantifies the freshness of the received information or status update. This work considers a setup of IoT devices deployed in an IoT network, where multiple unmanned aerial vehicles (UAVs) serve as mobile relay nodes between the sensors and the base station. We formulate an optimization problem to jointly plan the UAVs' trajectories while minimizing the AoI of the received messages and the devices' energy consumption. The solution accounts for the UAVs' battery lifetime and flight time to recharging depots to ensure the UAVs' green operation. The complex optimization problem is efficiently solved using a deep reinforcement learning algorithm. In particular, we propose a deep Q-network, which works as a function approximator to estimate the state-action value function. The proposed scheme converges quickly and results in lower ergodic age and ergodic energy consumption compared with benchmark algorithms such as the greedy algorithm (GA), nearest neighbour (NN), and random walk (RW).
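A minimal PyTorch sketch of a deep Q-network and its temporal-difference update for a discretized action set (e.g., movement directions plus a recharging-depot visit). The state/action dimensions, network width, and the dummy batch are illustrative assumptions; a real agent would sample transitions from a replay buffer.

```python
# Minimal deep Q-network sketch: the network approximates Q(s, a) over a
# discrete action set; td_update performs one temporal-difference step.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim=10, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),   # one Q-value per discrete action
        )

    def forward(self, state):
        return self.net(state)

def td_update(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference step on a (s, a, r, s', done) batch."""
    s, a, r, s2, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * q_target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

q = QNetwork()
q_target = QNetwork()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
# Dummy batch just to exercise the update.
batch = (torch.randn(32, 10), torch.randint(0, 5, (32,)),
         torch.randn(32), torch.randn(32, 10), torch.zeros(32))
print("TD loss:", td_update(q, q_target, opt, batch))
```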
Abstract:Many emerging Internet of Things (IoT) applications rely on information collected by sensor nodes, where the freshness of information is an important criterion. \textit{Age of Information} (AoI) is a metric that quantifies information timeliness, i.e., the freshness of the received information or status update. This work considers a setup of sensors deployed in an IoT network, where multiple unmanned aerial vehicles (UAVs) serve as mobile relay nodes between the sensors and the base station. We formulate an optimization problem to jointly plan the UAVs' trajectories while minimizing the AoI of the received messages. This ensures that the information received at the base station is as fresh as possible. The complex optimization problem is efficiently solved using a deep reinforcement learning (DRL) algorithm. In particular, we propose a deep Q-network, which works as a function approximator to estimate the state-action value function. The proposed scheme converges quickly and achieves a lower AoI than the random walk scheme. Our proposed algorithm reduces the average age by approximately $25\%$ and requires up to $50\%$ less energy when compared to the baseline scheme.
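A tiny worked illustration of the AoI recursion at the base station: the age grows by one each slot and resets to the delivery delay whenever a UAV forwards a fresh update. The delivery slots and delays below are made up.

```python
# AoI sawtooth: age grows linearly and resets when a fresh update arrives.
age, trace = 0, []
deliveries = {3: 1, 7: 2, 8: 1}   # hypothetical slot -> age of delivered update
for t in range(12):
    age = deliveries[t] if t in deliveries else age + 1
    trace.append(age)
print(trace)   # e.g. [1, 2, 3, 1, 2, 3, 4, 2, 1, 2, 3, 4]
```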
Abstract:In this paper, a multi-objective optimization problem (MOOP) is proposed for maximizing the achievable finite blocklength (FBL) rate while minimizing the utilized channel blocklengths (CBLs) in a reconfigurable intelligent surface (RIS)-assisted short packet communication system. The formulated MOOP has two objective functions: maximizing the total FBL rate under a target error probability, and minimizing the total number of utilized CBLs, which is directly proportional to the transmission duration. The optimization variables are the base station (BS) transmit power, the number of CBLs, and the passive beamforming at the RIS. Since the proposed non-convex problem is intractable, the Tchebyshev method is invoked to transform it into a single-objective optimization problem, and the alternating optimization (AO) technique is then employed to iteratively optimize the parameters across three main sub-problems. The numerical results reveal a fundamental trade-off between maximizing the achievable rate in the FBL regime and reducing the transmission duration, and emphasize the value of RIS technology in reducing the utilized CBLs while significantly increasing the achievable rate.
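A generic weighted Tchebyshev scalarization of the two objectives, with $f_1$ the total FBL rate (maximized) and $f_2$ the total utilized CBLs (minimized); the ideal points $z_i^{*}$, weights $\lambda_i \ge 0$, and the grouping of the variables into transmit powers $\mathbf{p}$, CBL counts $\mathbf{m}$, and RIS phases $\boldsymbol{\theta}$ are notational assumptions rather than the paper's exact formulation:

```latex
% Generic weighted Tchebyshev scalarization of the two-objective problem.
\begin{equation*}
  \min_{\mathbf{p},\,\mathbf{m},\,\boldsymbol{\theta}}\;
  \max\Big\{ \lambda_1 \big(z_1^{*} - f_1(\mathbf{p},\mathbf{m},\boldsymbol{\theta})\big),\;
             \lambda_2 \big(f_2(\mathbf{p},\mathbf{m},\boldsymbol{\theta}) - z_2^{*}\big) \Big\}
\end{equation*}
```

Sweeping the weights $\lambda_1, \lambda_2$ then traces out the trade-off between rate and transmission duration reported in the numerical results.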
Abstract:Ultra-reliable low-latency communications (URLLC) is a new service class introduced in 5G, characterized by strict reliability $(1-10^{-5})$ and low-latency (1 ms) requirements. To meet these requisites, several strategies such as overprovisioning of resources and channel-predictive algorithms have been developed. This paper describes the application of a nonlinear autoregressive neural network (NARNN) as a novel approach to forecast interference levels in a wireless system for the purpose of efficient resource allocation. Accurate interference forecasts also grant the possibility of meeting specific outage probability requirements in URLLC scenarios. The performance of this proposal is evaluated in terms of NARNN prediction accuracy and system resource usage. Our proposed approach achieved a promising mean absolute percentage error of $7.8\%$ on interference predictions and reduced resource usage by up to $15\%$ compared to a recently proposed interference prediction algorithm.
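A minimal NARNN-style sketch using scikit-learn: a feed-forward network regresses the next interference sample from a sliding window of past samples, which is the nonlinear autoregression at the core of the approach. The window length, network size, hold-out split, and toy trace are illustrative and may differ from the paper's configuration.

```python
# NARNN-style nonlinear autoregression: predict the next interference sample
# from a sliding window of past samples with a small feed-forward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.arange(600)
x = np.abs(np.sin(0.04 * t) + 0.2 * rng.standard_normal(600))  # toy interference

lags = 12
X = np.array([x[i:i + lags] for i in range(len(x) - lags)])    # lagged windows
y = x[lags:]                                                   # next-sample targets

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                                    # hold out the tail
mape = np.mean(np.abs(model.predict(X[-50:]) - y[-50:]) / (y[-50:] + 1e-3)) * 100
print(f"hold-out MAPE: {mape:.1f}%")                           # small epsilon avoids /0
```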