Abstract: Multimedia content and streaming are a major means of information exchange in the modern era, and the demand for such services is increasing. This, coupled with the advancement of future wireless networks (B5G/6G) and the proliferation of intelligent handheld mobile devices, has made multimedia content available to heterogeneous mobile users. Apart from conventional video, 360$^\circ$ videos have gained popularity with emerging virtual reality applications. All video formats (conventional and 360$^\circ$) undergo processing, compression, and transmission across dynamic, bandwidth-limited wireless channels to enable streaming services. This introduces video impairments, leading to quality degradation, and poses challenges in delivering good Quality-of-Experience (QoE) to viewers. QoE is a prominent subjective quality measure for assessing multimedia services and requires end-to-end evaluation. Efficient multimedia streaming techniques can improve service quality while dealing with dynamic network and end-user challenges. A paradigm shift toward user-centric multimedia services is envisioned, with a focus on Machine Learning (ML) based QoE modeling and streaming strategies. This survey presents a comprehensive overview of overall and continuous (time-varying) QoE modeling for QoE management in multimedia services. It also examines recent research on intelligent and adaptive multimedia streaming strategies, with special emphasis on ML-based techniques for conventional and 360$^\circ$ video streaming. The paper discusses overall and continuous QoE modeling to optimize the end-user viewing experience, efficient video streaming with a focus on user-centric strategies, and associated datasets for modeling and streaming, along with existing shortcomings and open challenges.
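To make the notion of overall QoE concrete, the following is a minimal sketch of a widely used session-level QoE formulation for adaptive streaming, which rewards per-segment quality and penalizes quality switches and rebuffering; the weights lambda_switch and mu_rebuf and the sample values are illustrative assumptions, not parameters from the surveyed models.

```python
# Minimal sketch of an overall (session-level) QoE model for adaptive
# video streaming: reward per-segment quality, penalize quality switches
# and rebuffering. Weights are illustrative assumptions, not standardized.

def overall_qoe(segment_qualities, rebuffer_seconds,
                lambda_switch=1.0, mu_rebuf=4.0):
    """segment_qualities: perceptual quality per segment (e.g., from bitrate).
    rebuffer_seconds: total stall time observed during playback."""
    quality = sum(segment_qualities)
    switching = sum(abs(b - a) for a, b in
                    zip(segment_qualities, segment_qualities[1:]))
    return quality - lambda_switch * switching - mu_rebuf * rebuffer_seconds

print(overall_qoe([3.0, 4.0, 4.0, 2.0], rebuffer_seconds=1.5))
```

Continuous, time-varying QoE models track such a score per segment or per instant rather than over the whole session, which lets them capture memory effects such as recency.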
Abstract: Mobile systems will have to support multiple AI-based applications, each leveraging heterogeneous data sources through DNN architectures collaboratively executed within the network. To minimize the cost of the AI inference task subject to requirements on latency, quality, and, crucially, reliability of the inference process, it is vital to jointly optimize (i) the set of sensors/data sources, (ii) the DNN architecture, (iii) the network nodes executing sections of the DNN, and (iv) the resources to use. To this end, we leverage dynamic gated neural networks with branches and propose a novel algorithmic strategy called Quantile-constrained Inference (QIC), based upon quantile-constrained policy optimization. QIC makes joint, high-quality, swift decisions on all the above aspects of the system, with the aim of minimizing inference energy cost. We remark that this is the first contribution connecting gated dynamic DNNs with infrastructure-level decision making. We evaluate QIC using a dynamic gated DNN with stems and branches for optimal sensor fusion and inference, trained on the RADIATE dataset offering Radar, LiDAR, and Camera data, and on real-world wireless measurements. Our results confirm that QIC matches the optimum and outperforms its alternatives by over 80%.
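QIC itself is based on quantile-constrained policy optimization; as a much simpler illustration of the underlying chance-constrained selection, the sketch below enumerates hypothetical (sensor set, DNN branch, executing node) configurations and keeps the cheapest one whose empirical latency quantile meets a deadline. All configuration names, costs, and latency distributions are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical configurations: (sensor set, DNN branch, executing node),
# each with an energy cost and sampled end-to-end latencies (ms).
rng = np.random.default_rng(0)
configs = {
    ("radar",           "branch_early", "edge"):  (1.0, rng.normal(40, 8,  500)),
    ("radar+camera",    "branch_mid",   "edge"):  (2.1, rng.normal(55, 10, 500)),
    ("radar+lidar+cam", "branch_full",  "cloud"): (3.5, rng.normal(70, 15, 500)),
}

def pick_config(deadline_ms=60.0, quantile=0.95):
    """Cheapest configuration whose latency quantile meets the deadline,
    i.e., P(latency <= deadline) >= quantile (a chance constraint)."""
    feasible = [(energy, cfg) for cfg, (energy, lat) in configs.items()
                if np.quantile(lat, quantile) <= deadline_ms]
    return min(feasible)[1] if feasible else None

print(pick_config())
```

The quantile constraint is what distinguishes this from a plain average-latency constraint: it bounds the tail of the latency distribution, which is what a reliability requirement on the inference process actually demands.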
Abstract: The increasing pervasiveness of intelligent mobile applications requires exploiting the full range of resources offered by the mobile-edge-cloud network for the execution of inference tasks. However, due to the heterogeneity of such multi-tiered networks, it is essential to match the applications' demand to the available resources while minimizing energy consumption. Modern dynamic deep neural networks (DNNs) achieve this goal through multi-branched architectures in which early exits enable sample-based adaptation of the model depth. In this paper, we tackle the problem of allocating sections of DNNs with early exits to the nodes of the mobile-edge-cloud system. By envisioning a 3-stage graph-modeling approach, we represent the possible options for splitting the DNN and deploying the DNN blocks on the multi-tiered network, embedding both the system constraints and the application requirements in a convenient and efficient way. Our framework, named Feasible Inference Graph (FIN), can identify the solution that minimizes the overall inference energy consumption while enabling distributed inference over the multi-tiered network with the target quality and latency. Our results, obtained for DNNs with different levels of complexity, show that FIN matches the optimum and yields over 65% energy savings relative to a state-of-the-art technique for cost minimization.
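The graph-modeling idea can be illustrated with a toy example: vertices encode (DNN block, execution tier) pairs, edge weights encode execution plus data-transfer energy, edges that would violate the latency or quality targets are simply omitted, and the minimum-energy deployment is then a shortest path. The blocks, tiers, and energy figures below are made-up assumptions, not FIN's actual model.

```python
import networkx as nx

# Toy feasible-inference graph: each vertex is (DNN block index, tier on
# which it runs); edge weights are the energy cost of executing the next
# block plus transferring intermediate data. Edges violating the latency
# or quality targets would simply not be added (hence "feasible" graph).
G = nx.DiGraph()
tiers = ["mobile", "edge", "cloud"]
energy = {"mobile": 3.0, "edge": 1.5, "cloud": 0.8}    # per-block exec cost
transfer = {("mobile", "edge"): 0.5, ("edge", "cloud"): 0.7,
            ("mobile", "cloud"): 1.4}                   # uplink transfer cost

n_blocks = 4
for t in tiers:   # data originates at the mobile device
    G.add_edge("src", (0, t), weight=energy[t] + transfer.get(("mobile", t), 0.0))
for b in range(n_blocks - 1):
    for t1 in tiers:
        for t2 in tiers:
            if t1 == t2 or (t1, t2) in transfer:        # no "downlink" splits
                hop = transfer.get((t1, t2), 0.0)
                G.add_edge((b, t1), (b + 1, t2), weight=energy[t2] + hop)
for t in tiers:
    G.add_edge((n_blocks - 1, t), "dst", weight=0.0)

path = nx.shortest_path(G, "src", "dst", weight="weight")
print(path)  # minimum-energy split/placement of the 4 DNN blocks
```

Encoding feasibility directly in the graph structure is what makes the optimization cheap: once infeasible options are pruned, any off-the-shelf shortest-path routine yields the minimum-energy deployment.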
Abstract: Battery-less Internet of Things (IoT) devices are a key element of the sustainable green initiative for next-generation wireless networks. These battery-free devices run on ambient energy harvested from the environment. The energy-harvesting environment is dynamic, causing intermittent task execution, and the harvested energy is stored in small capacitors, making it challenging to guarantee application task execution. Our main goal is to provide a mechanism that aggregates sensor data and sustains application support in a distributed battery-less IoT network. We model a distributed IoT network system consisting of many battery-free IoT sensor hardware modules and heterogeneous IoT applications supported across the device-edge-cloud continuum. The applications require sensor data from a distributed set of battery-less hardware modules, with provision for joint control over the module actuators. We propose an application-aware task and energy manager (ATEM) for the IoT devices and a vector-synchronization-based data aggregator (VSDA). ATEM is supported by device-level federated energy harvesting and system-level energy-aware heterogeneous application management. In our proposed framework, the data aggregator forecasts the power available from the ambient energy harvester using a Long Short-Term Memory (LSTM) model and sets the device profile and application task rates accordingly. Our proposed scheme meets the heterogeneous application requirements with negligible overhead; it reduces data loss and packet delay, increases hardware component availability, and makes components available sooner than the state-of-the-art.
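As an illustration of the forecasting step, the sketch below uses a small LSTM to predict the next harvested-power sample from a window of past readings and derives a task rate from the forecast; the architecture, window length, and rate rule are illustrative assumptions rather than the actual ATEM/VSDA design.

```python
import torch
import torch.nn as nn

# Minimal sketch: forecast the next harvested-power sample from a window
# of past samples with an LSTM, then derive a task rate from the forecast.
class PowerForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1) past power (mW)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predicted next power sample

model = PowerForecaster()
window = torch.rand(1, 16, 1)          # 16 past harvested-power readings
predicted_mw = model(window).item()

# Illustrative policy: scale the sensing/task rate with the forecast power,
# bounded by what the small storage capacitor can realistically sustain.
task_rate_hz = min(10.0, max(0.1, predicted_mw * 20.0))
print(predicted_mw, task_rate_hz)
```

In a deployed system the model would be trained on the device's own harvesting traces, and the rate rule would account for capacitor state of charge; both are abstracted away here for brevity.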