Abstract: Emergency Wireless Communication (EWC) networks adopt the User Datagram Protocol (UDP) to transmit scene images in real time for quickly assessing the extent of damage. However, existing UDP-based EWC exhibits suboptimal performance under poor channel conditions, since UDP lacks an Automatic Repeat reQuest (ARQ) mechanism. In addition, future EWC systems must not only enhance human decision-making during emergency response operations but also support Artificial Intelligence (AI)-driven approaches to improve rescue efficiency. Deep Learning-based Semantic Communication (DL-based SemCom) emerges as a robust, efficient, and task-oriented transmission scheme, suitable for deployment in UDP-based EWC. However, due to constraints on hardware capabilities and transmission resources, the EWC transmitter cannot integrate a sufficiently powerful Neural Network (NN) model, and thus fails to achieve ideal performance in EWC scenarios. For such scenarios, we propose a performance-constrained semantic coding model that accounts for the effects of both semantic noise and channel noise. We then derive the Cramér-Rao lower bound of the proposed semantic coding model as guidance for the design of the semantic codec, enhancing its adaptability to both semantic noise and channel noise. To further improve system performance, we propose a Digital-Analog transmission based Emergency Semantic Communication (DA-ESemCom) framework, which integrates analog DL-based semantic coding and digital Distributed Source Coding (DSC) schemes to leverage their respective advantages. Simulation results show that the proposed DA-ESemCom framework outperforms the classical Separated Source-Channel Coding (SSCC) scheme and other DL-based Joint Source-Channel Coding (DL-based JSCC) schemes in terms of fidelity and detection performance.
Abstract: Task-Oriented Semantic Communication (TOSC) has been regarded as a promising communication framework serving various Artificial Intelligence (AI) task-driven applications. Existing TOSC frameworks focus on extracting the full semantic features of source data and learning low-dimensional channel inputs to transmit them within limited bandwidth resources. Although transmitting full semantic features preserves the integrity of the data's meaning, this approach does not reach the performance limit of TOSC. In this paper, we propose a Task-oriented Adaptive Semantic Communication (TasCom) framework, which aims to effectively facilitate the execution of AI tasks by sending only task-related semantic features. In the TasCom framework, we first propose a Generative AI (GAI) architecture-based Generative Joint Source-Channel Coding (G-JSCC) scheme for efficient semantic transmission. Then, an Adaptive Coding Controller (ACC) is proposed to find the optimal coding scheme for the proposed G-JSCC, allowing the semantic features with significant contributions to the AI task to preferentially occupy the limited bandwidth resources for wireless transmission. Furthermore, we propose a generative training algorithm to train the proposed TasCom for optimal performance. Simulation results show that the proposed TasCom outperforms existing TOSC and traditional codec schemes on object detection and instance segmentation tasks under all considered channel conditions.