Abstract: The rapid growth of Internet of Things (IoT) applications necessitates robust resource allocation in future sixth-generation (6G) networks, particularly in the upper mid-band (7-15 GHz, FR3). This paper presents a novel intelligent reflecting surface (IRS)-assisted framework that combines terrestrial IRSs (TIRSs) with aerial IRSs (AIRSs) mounted on low-altitude platform stations to ensure reliable connectivity under severe line-of-sight (LoS) blockages. In contrast to prior work restricted to terrestrial IRSs and to the mmWave and THz bands, this work targets the FR3 spectrum, the so-called Golden Band for 6G. The joint beamforming and user association (JBUA) problem is formulated as a mixed-integer nonlinear program (MINLP) and solved through problem decomposition, zero-forcing beamforming, and a stable matching algorithm. Comprehensive simulations show that our method approaches exhaustive-search performance at significantly lower complexity, outperforming existing greedy and random baselines. These results provide a scalable blueprint for real-world 6G deployments supporting massive IoT connectivity in challenging environments.
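As an illustrative aside (not part of the paper), a minimal numpy sketch of the zero-forcing beamforming step referenced above: the precoder is the right pseudo-inverse of the aggregate user channel, rescaled to a total power budget, which nulls inter-user interference by construction. The antenna counts and power budget are assumptions for the example.

    import numpy as np

    def zero_forcing_precoder(H, P_total):
        """Zero-forcing precoder W = H^H (H H^H)^-1, scaled so that the total
        transmit power equals P_total. H is K x M (users x BS antennas)."""
        W = H.conj().T @ np.linalg.inv(H @ H.conj().T)          # right pseudo-inverse
        W *= np.sqrt(P_total / np.trace(W @ W.conj().T).real)   # power normalization
        return W

    K, M = 4, 8                                   # 4 single-antenna users, 8 BS antennas (assumed)
    H = (np.random.randn(K, M) + 1j * np.random.randn(K, M)) / np.sqrt(2)
    W = zero_forcing_precoder(H, P_total=1.0)
    print(np.round(np.abs(H @ W), 6))             # ~diagonal: inter-user interference nulled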
Abstract: The increasing demand for Internet of Things (IoT) applications has accelerated the need for robust resource allocation in sixth-generation (6G) networks. In this paper, we propose a reconfigurable intelligent surface (RIS)-assisted upper mid-band communication framework. To ensure robust connectivity under severe line-of-sight (LoS) blockages, we use a two-tier RIS structure comprising terrestrial RISs (TRISs) and high-altitude platform station (HAPS)-mounted RISs (HRISs). To maximize network sum rate, we formulate a joint beamforming, power allocation, and IoT device association (JBPDA) problem as a mixed-integer nonlinear program (MINLP). The formulated MINLP problem is challenging to solve directly; therefore, we tackle it via a decomposition approach. The zero-forcing (ZF) technique is used to optimize the beamforming matrix, a closed-form expression for power allocation is derived, and a stable matching-based algorithm is proposed for device-RIS association based on achievable data rates. Comprehensive simulations demonstrate that the proposed scheme approaches the performance of exhaustive search (ES) while exhibiting substantially lower complexity, and it consistently outperforms greedy search (GS) and random search (RS) baselines. Moreover, the proposed scheme converges much faster than the ES scheme.
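As an illustrative aside (not part of the paper), a minimal sketch of a deferred-acceptance (Gale-Shapley-style) matching of the kind used here for device-RIS association: devices propose to RISs in decreasing order of achievable rate, and each RIS provisionally keeps its best proposers up to a quota. The rate matrix, device count, and quotas are assumptions for the example.

    import numpy as np

    def stable_match(rate, quota):
        """Device-proposing deferred acceptance. rate[d, r] is the achievable
        rate of device d via RIS r; each RIS r accepts at most quota[r] devices."""
        n_dev, n_ris = rate.shape
        prefs = [list(np.argsort(-rate[d])) for d in range(n_dev)]  # best RIS first
        accepted = {r: [] for r in range(n_ris)}
        free = list(range(n_dev))
        while free:
            d = free.pop()
            if not prefs[d]:
                continue                                # device exhausted its list
            r = prefs[d].pop(0)
            accepted[r].append(d)
            accepted[r].sort(key=lambda x: -rate[x, r])
            if len(accepted[r]) > quota[r]:
                free.append(accepted[r].pop())          # evict the lowest-rate device
        return accepted

    rate = np.random.rand(6, 2)                         # 6 devices, 2 RISs (assumed)
    print(stable_match(rate, quota=[3, 3]))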
Abstract: Organizations and enterprises across domains such as healthcare, finance, and scientific research are increasingly required to extract collective intelligence from distributed, siloed datasets while adhering to strict privacy, regulatory, and sovereignty requirements. Federated Learning (FL) enables collaborative model building without sharing sensitive raw data but faces growing challenges from statistical heterogeneity, system diversity, and the computational burden of complex models. This study examines the potential of quantum-assisted federated learning, which can reduce the number of parameters of classical models by polylogarithmic factors and thus lessen training overhead. Accordingly, we introduce QFed, a quantum-enabled federated learning framework aimed at boosting computational efficiency across edge-device networks. We evaluate the proposed framework on the widely adopted FashionMNIST dataset. Experimental results show that QFed achieves a 77.6% reduction in the parameter count of a VGG-like model while maintaining accuracy comparable to classical approaches in a scalable environment. These results point to the potential of leveraging quantum computing within a federated learning context to strengthen the FL capabilities of edge devices.
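As an illustrative aside (not part of the paper), the abstract does not spell out the server-side aggregation rule; a common choice, assumed here, is federated averaging (FedAvg), which is agnostic to whether the aggregated vectors hold classical weights or variational quantum circuit angles.

    import numpy as np

    def fedavg(client_params, client_sizes):
        """Federated averaging: the server aggregates client parameter vectors
        weighted by local dataset size. Works unchanged whether the entries
        are classical weights or variational quantum circuit angles."""
        w = np.asarray(client_sizes, dtype=float)
        w /= w.sum()
        return sum(wi * p for wi, p in zip(w, client_params))

    clients = [np.random.randn(128) for _ in range(5)]   # e.g., VQC rotation angles (assumed size)
    sizes = [600, 540, 700, 480, 650]                    # local sample counts (assumed)
    global_params = fedavg(clients, sizes)
    print(global_params.shape)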
Abstract: Anomaly detection in time-series data is a critical challenge with significant implications for network security. Recent quantum machine learning approaches, such as quantum kernel methods and variational quantum circuits, have shown promise in capturing complex data distributions for anomaly detection but remain constrained by limited qubit counts. In this work, we introduce a novel Quantum Gated Recurrent Unit (QGRU)-based Generative Adversarial Network (GAN) employing Successive Data Injection (SuDaI) and a multi-metric gating strategy for robust network anomaly detection. Our model uniquely utilizes a quantum-enhanced generator that outputs the parameters (mean and log-variance) of a Gaussian distribution via reparameterization, combined with a Wasserstein critic that stabilizes adversarial training. Anomalies are identified through a novel gating mechanism that first flags potential anomalies based on Gaussian uncertainty estimates and then verifies them using a composite of critic scores and reconstruction errors. Evaluated on benchmark datasets, our method achieves a high time-series-aware F1 score (TaF1) of 89.43%, demonstrating superior capability in detecting anomalies accurately and promptly compared to existing classical and quantum models. Furthermore, the trained QGRU-WGAN was deployed on real IBM Quantum hardware, where it retained high anomaly detection performance, confirming its robustness and practical feasibility on current noisy intermediate-scale quantum (NISQ) devices.
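As an illustrative aside (not part of the paper), a minimal numpy sketch of a two-stage gate of the kind described: timesteps with high predictive Gaussian uncertainty are flagged, and flagged points are confirmed with a composite of reconstruction error and critic score. All thresholds and the weighting are assumptions for the example.

    import numpy as np

    def gate_anomalies(mu, logvar, x, critic_score,
                       var_thresh=1.5, comp_thresh=1.0, alpha=0.5):
        """Two-stage gating (illustrative thresholds). Stage 1 flags timesteps
        with high Gaussian uncertainty; stage 2 confirms flagged points with a
        composite of reconstruction error and (negated) critic score."""
        sigma = np.exp(0.5 * logvar)
        candidates = sigma > var_thresh                       # stage 1: uncertainty flag
        recon_err = (x - mu) ** 2
        composite = alpha * recon_err - (1 - alpha) * critic_score
        return candidates & (composite > comp_thresh)         # stage 2: verification

    T = 200
    mu, logvar = np.zeros(T), np.random.randn(T) * 0.3        # generator outputs (placeholder)
    x, critic = np.random.randn(T), np.random.randn(T)        # observations, critic scores
    print(gate_anomalies(mu, logvar, x, critic).sum(), "points gated as anomalous")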
Abstract: This paper explores the application of a federated-learning-based multi-agent reinforcement learning (MARL) strategy to enhance physical-layer security (PLS) in a multi-cell network in the context of beyond-5G networks. In each cell, a base station (BS) operates as a deep reinforcement learning (DRL) agent that interacts with the surrounding environment to maximize the secrecy rate of legitimate users in the presence of an eavesdropper that attempts to intercept the confidential information shared between the BS and its authorized users. The DRL agents are federated in the sense that they share only their network parameters with a central server, never the private data of their legitimate users. Two DRL approaches, deep Q-network (DQN) and REINFORCE deep policy gradient (RDPG), are explored and compared. The results demonstrate that RDPG converges more rapidly than DQN, and that the proposed method outperforms a distributed DRL approach. Furthermore, the outcomes illustrate the trade-off between security and complexity.
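As an illustrative aside (not part of the paper), the secrecy rate such agents maximize is conventionally the positive part of the capacity gap between the legitimate link and the eavesdropper's link; a one-line worked example with assumed SNRs:

    import numpy as np

    def secrecy_rate(snr_user, snr_eve):
        """Secrecy rate in bits/s/Hz: [log2(1 + SNR_user) - log2(1 + SNR_eve)]^+,
        the usual reward signal for PLS-oriented DRL agents."""
        return max(0.0, np.log2(1 + snr_user) - np.log2(1 + snr_eve))

    print(secrecy_rate(snr_user=10**(15/10), snr_eve=10**(5/10)))  # 15 dB vs 5 dB (assumed)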
Abstract: Cognitive radio networks (CRNs) are acknowledged for their ability to tackle spectrum under-utilization. Within this realm, this paper investigates energy efficiency and addresses the critical challenge of optimizing system reliability in the overlay CRN access mode. We consider randomly dispersed secondary users (SUs) serving as relays for primary users (PUs), one of which is designated to harvest energy through the time-switching energy harvesting (EH) protocol. This relay amplifies and forwards (AF) the PU's messages and broadcasts them, along with its own, across cascaded $\kappa$-$\mu$ fading channels. The power-splitting protocol is a second EH approach, used by the SU and PU receivers to replenish the energy in their storage devices. In addition, the SU transmitters and the SU receiver are equipped with multiple receive antennas and apply maximal ratio combining. The outage probability is used to assess the reliability of both networks, followed by an energy efficiency evaluation that quantifies the effect of EH on the system. Finally, an optimization problem is formulated to maximize the data rate of the SUs by optimizing the time-switching and power allocation parameters of the SU relay.
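As an illustrative aside (not part of the paper), under the time-switching protocol a fraction alpha of each block is spent harvesting and the remainder relaying, so harvested energy and effective rate trade off through alpha. A minimal sweep under assumed channel gain, conversion efficiency, and noise values (the half prelog reflects the two-hop AF relaying):

    import numpy as np

    # Time-switching EH at the relay (illustrative parameters): a fraction
    # alpha of each block of duration T harvests energy, the rest forwards data.
    eta, P_s, g, T, sigma2 = 0.7, 1.0, 0.8, 1.0, 0.01   # efficiency, source power, |h|^2, block, noise
    for alpha in (0.1, 0.3, 0.5, 0.7):
        E_h = eta * alpha * T * P_s * g                          # harvested energy
        P_r = E_h / ((1 - alpha) * T)                            # relay transmit power
        rate = (1 - alpha) / 2 * np.log2(1 + P_r * g / sigma2)   # two-hop AF prelog
        print(f"alpha={alpha:.1f}  P_r={P_r:.3f}  rate={rate:.3f}")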
Abstract: This paper presents a reinforcement learning (RL)-based approach to improve the physical layer security (PLS) of an underlay cognitive radio network (CRN) over cascaded channels, which model highly mobile networks such as cognitive vehicular networks (CVNs). An eavesdropper aims to intercept the communications between secondary users (SUs). The SU receiver has full-duplex and energy harvesting capabilities and generates jamming signals to confound the eavesdropper and enhance security. Moreover, the SU transmitter harvests energy from ambient radio frequency signals to power subsequent transmissions to its intended receiver. To optimize the privacy and reliability of the SUs in a CVN, a deep Q-network (DQN) strategy is utilized in which multiple DQN agents are deployed, one at each SU transmitter. The objective of the SUs is to determine the optimal transmission power and to decide whether to collect energy or transmit messages in each time period so as to maximize their secrecy rate. We further propose a DQN approach to maximize the throughput of the SUs while respecting the interference threshold tolerable at the primary user's receiver. Our findings show that our strategy outperforms two baseline strategies in terms of security and reliability.
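As an illustrative aside (not part of the paper), a minimal PyTorch sketch of one SU agent's decision step: a small Q-network scores a discrete action set (harvest, or transmit at one of several power levels) and the agent acts epsilon-greedily. The network sizes, state features, and power grid are assumptions for the example.

    import torch, torch.nn as nn

    ACTIONS = ["harvest"] + [f"tx@{p}W" for p in (0.2, 0.5, 1.0)]  # assumed action grid

    class QNet(nn.Module):
        """Tiny Q-network: state -> one Q-value per action (illustrative sizes)."""
        def __init__(self, state_dim=4, n_actions=len(ACTIONS)):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_actions))
        def forward(self, s):
            return self.net(s)

    def act(qnet, state, eps=0.1):
        """Epsilon-greedy action selection for one SU transmitter agent."""
        if torch.rand(1).item() < eps:
            return torch.randint(len(ACTIONS), (1,)).item()
        with torch.no_grad():
            return qnet(state).argmax().item()

    q = QNet()
    state = torch.randn(4)          # e.g., battery level, channel gains, interference (assumed)
    print(ACTIONS[act(q, state)])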
Abstract: In this paper, a reinforcement learning technique is employed to maximize the performance of a cognitive radio network (CRN). In the presence of primary users (PUs), two secondary users (SUs) are presumed to access the licensed band in underlay mode. The SU transmitter is assumed to be an energy-constrained device that must harvest energy in order to transmit signals to its intended destination. We therefore consider two main sources of energy: the interference from the PUs' transmissions and ambient radio frequency (RF) sources. Based on a predetermined threshold, the SU selects whether to gather energy from the PUs or only from ambient sources; energy harvesting from the PUs' messages is accomplished via the time-switching approach. In addition, using a deep Q-network (DQN) approach, the SU transmitter decides in each time slot whether to collect energy or transmit messages, and selects a suitable transmission power, in order to maximize its average data rate. Our findings show that our approach converges and outperforms a baseline strategy.
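As an illustrative aside (not part of the paper), a minimal sketch of the threshold rule the abstract describes: if the PU interference power exceeds a preset threshold, the SU harvests from PU signals via time switching; otherwise it harvests from ambient RF only. The threshold, efficiency, and slot structure are assumptions for the example.

    import numpy as np

    def harvested_energy(p_pu, p_ambient, alpha, T=1.0, eta=0.6, thresh=0.5):
        """Threshold-based energy-source selection (illustrative numbers): if the
        PU interference power exceeds `thresh`, harvest from PU signals during a
        fraction alpha of the slot (time switching); otherwise use ambient RF only."""
        if p_pu > thresh:
            return eta * alpha * T * p_pu          # time-switching harvest from PUs
        return eta * T * p_ambient                 # ambient RF harvesting

    for p_pu in (0.2, 0.8):                        # weak vs strong PU interference (assumed)
        print(p_pu, harvested_energy(p_pu, p_ambient=0.1, alpha=0.3))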
Abstract: Quantum computing may offer new approaches for advancing machine learning, including complex tasks such as anomaly detection in network traffic. In this paper, we introduce a quantum generative adversarial network (QGAN) architecture for multivariate time-series anomaly detection that leverages variational quantum circuits (VQCs) in combination with a time-window shifting technique, data re-uploading, and successive data injection (SuDaI). The method encodes multivariate time-series data as rotation angles. By integrating both data re-uploading and SuDaI, the approach maps classical data into quantum states efficiently, helping to address hardware limitations such as the restricted number of available qubits. In addition, the approach employs an anomaly scoring technique that utilizes both the generator and the discriminator outputs to enhance detection accuracy. The QGAN was trained using the parameter-shift rule and benchmarked against a classical GAN. Experimental results indicate that the quantum model achieves high accuracy, recall, and F1-scores in anomaly detection and attains a lower MSE than the classical model. Notably, the QGAN accomplishes this with only 80 parameters, demonstrating competitive results with a compact architecture. Tests using a noisy simulator suggest that the approach remains effective under realistic noise-prone conditions.
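As an illustrative aside (not part of the paper), a minimal PennyLane sketch of angle encoding with data re-uploading as described: each layer re-encodes the same time window as RY rotations before its trainable rotations and entangling chain. The qubit count and depth are assumptions and do not reproduce the paper's 80-parameter model.

    import pennylane as qml
    import numpy as np

    n_qubits, n_layers = 4, 3                       # assumed sizes
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def generator(window, weights):
        """Angle encoding with data re-uploading: the same time window is
        re-encoded as RY rotations at every layer, interleaved with trainable
        rotations and a CNOT entangling chain."""
        for l in range(n_layers):
            for i in range(n_qubits):
                qml.RY(window[i], wires=i)          # re-upload classical data
            for i in range(n_qubits):
                qml.Rot(*weights[l, i], wires=i)    # trainable layer
            for i in range(n_qubits - 1):
                qml.CNOT(wires=[i, i + 1])
        return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

    window = np.pi * np.random.rand(n_qubits)       # scaled time-series window (placeholder)
    weights = np.random.randn(n_layers, n_qubits, 3)
    print(generator(window, weights))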
Abstract: Quantum machine learning takes advantage of quantum computation to improve machine learning tasks. One potential application is to harness the power of quantum computers for generating classical data, a process essential to a multitude of applications such as enriching training datasets, anomaly detection, and risk management in finance. Given the success of generative adversarial networks (GANs) in classical image generation, quantum versions have been actively developed. However, existing implementations on quantum computers often face significant challenges, such as scalability and training-convergence issues. To address these issues, we propose LatentQGAN, a novel quantum model that couples a hybrid quantum-classical GAN with an autoencoder. Although initially designed for image generation, the LatentQGAN approach holds potential for broader application across various practical data-generation tasks. Experimental results on both classical simulators and noisy intermediate-scale quantum (NISQ) computers demonstrate significant performance enhancements over existing quantum methods, alongside a significant reduction in quantum resource overhead.
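As an illustrative aside (not part of the paper), a structural sketch of the latent-space idea: an autoencoder compresses images to a latent dimension matched to the qubit budget, so the quantum generator only has to produce latent vectors, which a frozen decoder lifts back to pixels. Here PCA stands in for the autoencoder and a random stub for the trained quantum generator; both are assumptions for the example.

    import numpy as np
    from sklearn.decomposition import PCA

    # Structural sketch (PCA stands in for the autoencoder; a stub stands in
    # for the trained quantum generator). The point: the QGAN works in a
    # latent space sized to the qubit budget, not in pixel space.
    latent_dim = 8                                   # = number of qubits (assumed)
    images = np.random.rand(500, 28 * 28)            # placeholder training images

    pca = PCA(n_components=latent_dim).fit(images)   # "encoder/decoder" surrogate

    def quantum_generator_stub(batch):
        """Stand-in for VQC expectation values, which lie in [-1, 1]."""
        return np.random.uniform(-1, 1, size=(batch, latent_dim))

    z = quantum_generator_stub(16)                   # quantum GAN samples latent vectors
    fakes = pca.inverse_transform(z)                 # frozen decoder lifts to pixel space
    print(fakes.shape)                               # (16, 784)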