Abstract: The metaverse is expected to provide immersive entertainment, education, and business applications. However, virtual reality (VR) transmission over wireless networks is data- and computation-intensive, making it critical to introduce novel solutions that meet stringent quality-of-service requirements. With recent advances in edge intelligence and deep learning, we have developed a novel multi-view synthesizing framework that can efficiently provide computation, storage, and communication resources for wireless content delivery in the metaverse. We propose a three-dimensional (3D)-aware generative model that uses collections of single-view images. These single-view images are transmitted to a group of users with overlapping fields of view, avoiding the massive content transmission required when delivering tiles or whole 3D models. We then present a federated learning approach to guarantee an efficient learning process. Training performance can be improved by characterizing the vertical and horizontal data samples with a large latent feature space, while low-latency communication can be achieved by reducing the number of parameters transmitted during federated learning. We also propose a federated transfer learning framework to enable fast adaptation to different target domains. Simulation results demonstrate the effectiveness of the proposed federated multi-view synthesizing framework for VR content delivery.
Abstract: As a key technology in the metaverse, wireless ultimate extended reality (XR) has attracted extensive attention from both industry and academia. However, stringent latency and ultra-high data rate requirements have hindered the development of wireless ultimate XR. Instead of transmitting the original source data bit-by-bit, semantic communications focus on the successful delivery of the semantic information contained in the source, and have shown great potential in reducing the data traffic of wireless systems. Inspired by semantic communications, this article develops a joint semantic sensing, rendering, and communication framework for wireless ultimate XR. In particular, semantic sensing is used to improve sensing efficiency by exploiting the spatial-temporal distribution of semantic information. Semantic rendering is designed to reduce the cost incurred by semantically redundant pixels. Next, semantic communications are adopted to achieve high data transmission efficiency in wireless ultimate XR. Two case studies are then provided to demonstrate the effectiveness of the proposed framework. Finally, potential research directions are identified to boost the development of semantic-aware wireless ultimate XR.