Abstract: This paper proposes a semantic pilot design for data-aided channel estimation in text-inclusive data transmission, using a large language model (LLM). In this scenario, channel impairments often appear as typographical errors in the decoded text, which can be corrected using an LLM. The proposed method compares the initially decoded text with the LLM-corrected version to identify reliably decoded symbols. A set of selected symbols, referred to as a semantic pilot, is used as an additional pilot for data-aided channel estimation. To the best of our knowledge, this work is the first to leverage semantic information for reliable symbol selection. Simulation results demonstrate that the proposed scheme outperforms conventional pilot-only estimation, achieving lower normalized mean squared error and phase error of the estimated channel, as well as a reduced bit error rate.
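To make the data-aided step concrete, the following is a minimal sketch of the idea rather than the paper's exact algorithm: it assumes a single-tap flat-fading channel, QPSK modulation, ASCII text, and a placeholder standing in for the LLM corrector. Symbol positions where the initially decoded text agrees with the corrected text are re-modulated and appended to the pilot for a second least-squares channel estimate.

```python
# Minimal sketch (assumptions: single-tap flat fading, QPSK, ASCII text,
# and a placeholder in place of the actual LLM-based text corrector).
import numpy as np

rng = np.random.default_rng(0)

def qpsk_mod(bits):
    b = bits.reshape(-1, 2).astype(float)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def qpsk_demod(symbols):
    return np.stack([symbols.real < 0, symbols.imag < 0], axis=-1).astype(np.uint8).ravel()

text = "semantic pilots aid channel estimation"
data_bits = np.unpackbits(np.frombuffer(text.encode("latin-1"), dtype=np.uint8))
data = qpsk_mod(data_bits)                               # 4 QPSK symbols per character
pilot = qpsk_mod(rng.integers(0, 2, 8, dtype=np.uint8))  # conventional pilot block

h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)  # flat channel
tx = np.concatenate([pilot, data])
noise = 0.15 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))
rx = h * tx + noise

# 1) Pilot-only least-squares estimate and initial text decoding.
h_pilot = np.vdot(pilot, rx[:pilot.size]) / np.vdot(pilot, pilot)
dec_bits = qpsk_demod(rx[pilot.size:] / h_pilot)
dec_text = np.packbits(dec_bits).tobytes().decode("latin-1")

# 2) Text correction step; the ground-truth text stands in for the LLM output here.
corrected = text  # placeholder for an actual LLM call, e.g. llm_correct(dec_text)

# 3) Characters on which the decoded and corrected texts agree form the
#    semantic pilot; re-modulate them and re-estimate the channel.
reliable = [i for i, (a, b) in enumerate(zip(dec_text, corrected)) if a == b]
sem_idx = np.concatenate([np.arange(4 * i, 4 * (i + 1)) for i in reliable])
sem_bits = np.unpackbits(np.frombuffer(
    "".join(corrected[i] for i in reliable).encode("latin-1"), dtype=np.uint8))
sem_pilot = qpsk_mod(sem_bits)

ref = np.concatenate([pilot, sem_pilot])
obs = np.concatenate([rx[:pilot.size], rx[pilot.size:][sem_idx]])
h_aided = np.vdot(ref, obs) / np.vdot(ref, ref)

print(f"pilot-only NMSE: {abs(h_pilot - h) ** 2 / abs(h) ** 2:.4f}")
print(f"data-aided NMSE: {abs(h_aided - h) ** 2 / abs(h) ** 2:.4f}")
```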
Abstract: Grant-free (GF) access is essential for massive connectivity but faces collision risks due to uncoordinated transmissions. While user-side sensing can mitigate these collisions by enabling autonomous transmission decisions, conventional methods become ineffective in overloaded scenarios where the number of active streams exceeds the number of receive antennas. To address this problem, we propose a differential stream sensing framework that reframes the problem from estimating the total stream count to isolating newly activated streams via covariance differencing. We analyze the covariance deviation induced by channel variations and establish a theoretical bound, based on channel correlation, for determining the sensing window size. To mitigate residual interference from finite sampling, a deep learning (DL) classifier is integrated. Simulations across both independent and identically distributed flat Rayleigh fading and standardized channel environments demonstrate that the proposed method consistently outperforms non-DL baselines and remains robust in overloaded scenarios.
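The covariance-differencing idea can be illustrated with a small simulation. The sketch below is a simplified stand-in (i.i.d. flat Rayleigh channels held fixed across two sensing windows, unit-power Gaussian symbols, and a crude eigenvalue threshold in place of the paper's DL classifier), not the proposed framework itself.

```python
# Covariance-differencing sketch under simplifying assumptions (see note above).
import numpy as np

rng = np.random.default_rng(1)
M, K_old, K_new, N = 4, 5, 1, 5000      # antennas, existing/new streams, window length
noise_var = 0.1

# i.i.d. flat Rayleigh channels, assumed static over the two sensing windows
H_old = (rng.standard_normal((M, K_old)) + 1j * rng.standard_normal((M, K_old))) / np.sqrt(2)
H_new = (rng.standard_normal((M, K_new)) + 1j * rng.standard_normal((M, K_new))) / np.sqrt(2)

def receive(H, N):
    """Received block Y = H S + W with unit-power Gaussian symbols."""
    K = H.shape[1]
    S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    W = np.sqrt(noise_var / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    return H @ S + W

Y1 = receive(H_old, N)                       # window 1: already-overloaded streams (K_old > M)
Y2 = receive(np.hstack([H_old, H_new]), N)   # window 2: same streams plus the new arrival

R1 = Y1 @ Y1.conj().T / N                    # sample covariances
R2 = Y2 @ Y2.conj().T / N
delta = R2 - R1                              # shared streams and noise cancel in expectation

eigvals = np.sort(np.linalg.eigvalsh(delta))[::-1]
threshold = 0.5                              # crude fixed threshold; the paper integrates a DL classifier
print("eigenvalues of the covariance difference:", np.round(eigvals, 3))
print("estimated newly activated streams:", int(np.sum(eigvals > threshold)))
```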




Abstract: With the recent advancements in deep learning, semantic communication, which transmits only task-oriented features, has rapidly emerged. However, since feature extraction relies on learning-based models, its performance fundamentally depends on the training dataset or task. For practical scenarios, it is essential to design a model that demonstrates robust performance regardless of the dataset or task. In this correspondence, we propose a novel text transmission model that selects and transmits only a few characters and recovers the missing characters at the receiver using a large language model (LLM). Additionally, we propose a novel importance character extractor (ICE), which selects the transmitted characters so as to enhance the LLM's recovery performance. Simulations demonstrate that the proposed filter selection by ICE outperforms random filter selection, which chooses the transmitted characters at random. Moreover, the proposed model exhibits robust performance across different datasets and tasks and outperforms traditional bit-based communication in low signal-to-noise ratio conditions.
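As a rough illustration of the select-then-recover pipeline, the toy sketch below uses a hand-crafted importance score (favoring word-initial and rare characters) in place of the learned ICE, and a placeholder `llm_recover` function in place of the receiver-side LLM; both are illustrative assumptions, not the paper's components.

```python
# Toy select-then-recover sketch; the scoring heuristic and llm_recover
# placeholder stand in for the learned ICE and the receiver-side LLM.
from collections import Counter

def importance_scores(text):
    # Heuristic stand-in for ICE: word-initial and rare characters score higher.
    freq = Counter(text)
    scores = []
    for i, ch in enumerate(text):
        word_initial = (i == 0 or text[i - 1] == " ")
        scores.append((2.0 if word_initial else 1.0) / freq[ch])
    return scores

def select_characters(text, keep_ratio=0.4):
    # Keep only the top-scoring fraction of characters for transmission.
    scores = importance_scores(text)
    k = max(1, int(keep_ratio * len(text)))
    keep = sorted(sorted(range(len(text)), key=lambda i: -scores[i])[:k])
    return [(i, text[i]) for i in keep]          # (position, character) pairs to transmit

def render_masked(selected, length, mask="_"):
    # Receiver-side view before recovery: transmitted characters in place, rest masked.
    out = [mask] * length
    for i, ch in selected:
        out[i] = ch
    return "".join(out)

def llm_recover(masked_text):
    # Placeholder: a real system would prompt an LLM to fill in the masked positions.
    return masked_text

sentence = "semantic communication transmits only task oriented features"
selected = select_characters(sentence, keep_ratio=0.4)
masked = render_masked(selected, len(sentence))
print(masked)
print(llm_recover(masked))
```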




Abstract: Anomaly detection is a classical problem whose aim is to detect anomalous data that do not belong to the normal data distribution. Current state-of-the-art methods for anomaly detection on complex, high-dimensional data are based on the generative adversarial network (GAN). However, the traditional GAN loss is not directly aligned with the anomaly detection objective: it encourages the distribution of the generated samples to overlap with the real data, and the resulting discriminator has therefore been found to be ineffective as an anomaly detector. In this paper, we propose simple modifications to the GAN loss such that the generated samples lie at the boundary of the real data distribution. With our modified GAN loss, our anomaly detection method, called Fence GAN (FGAN), directly uses the discriminator score as an anomaly threshold. Our experimental results on the MNIST, CIFAR10, and KDD99 datasets show that Fence GAN yields the best anomaly classification accuracy compared with state-of-the-art methods.
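The sketch below shows one way such a boundary-seeking objective can be written in PyTorch: the generator is pushed so that the discriminator output on generated samples sits at a target value alpha in (0, 1), with a dispersion term keeping the generated samples spread out, and the discriminator score is thresholded at test time. The exact loss terms and weights used by Fence GAN may differ, so treat this as an assumption-laden illustration rather than the paper's loss.

```python
# Boundary-seeking GAN loss sketch (assumed form; not necessarily the exact
# Fence GAN formulation).
import torch

def generator_loss(d_fake, fake_samples, alpha=0.5, beta=10.0, eps=1e-8):
    # (i) boundary term: drive D(G(z)) toward alpha rather than 1
    boundary = torch.log(torch.abs(d_fake - alpha) + eps).mean()
    # (ii) dispersion term: penalize generated points collapsing onto their centroid
    center = fake_samples.mean(dim=0, keepdim=True)
    dispersion = 1.0 / (fake_samples - center).norm(dim=1).mean().clamp_min(eps)
    return boundary + beta * dispersion

def discriminator_loss(d_real, d_fake, gamma=0.5, eps=1e-8):
    # Standard cross-entropy terms, with the fake term down-weighted by gamma.
    real_term = -torch.log(d_real + eps).mean()
    fake_term = -torch.log(1.0 - d_fake + eps).mean()
    return real_term + gamma * fake_term

def anomaly_score(discriminator, x):
    # Lower discriminator output -> more anomalous; threshold this score at test time.
    return 1.0 - discriminator(x)

# Toy usage with random stand-ins for discriminator outputs and generated samples.
d_fake = torch.rand(32, 1)
d_real = 0.5 + 0.5 * torch.rand(32, 1)
fake_samples = torch.randn(32, 2)
print(generator_loss(d_fake, fake_samples))
print(discriminator_loss(d_real, d_fake))
```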