Abstract: Edge intelligence is an emerging technology where base stations located at the edge of the network are equipped with computing units that provide machine learning services to the end users. To provide high-quality services in a cost-efficient way, the wireless and computing resources need to be dimensioned carefully. In this paper, we address the problem of resource dimensioning in a single-cell system that supports edge video analytics under latency and accuracy constraints. We show that the resource-dimensioning problem can be transformed into a convex optimization problem, and we provide numerical results that give insights into the trade-offs between the wireless and computing resources for varying cell sizes and varying intensities of incoming tasks. Overall, we observe that the wireless and computing resources exhibit opposite trends: the wireless resources benefit from smaller cells, where high attenuation losses are avoided, whereas the computing resources benefit from larger cells, where statistical multiplexing allows more tasks to be computed. We also show that small cells with low loads have high per-request costs, even when the wireless resources are increased to compensate for the low multiplexing gain at the servers.
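One generic form that such a dimensioning problem can take (an illustrative sketch with assumed notation, not the exact formulation used in the paper) is
\[
\min_{W,\,C \,\ge\, 0}\ \ \alpha W + \beta C \quad \text{subject to} \quad d_{\mathrm{tx}}(W) + d_{\mathrm{comp}}(C) \le D_{\max},
\]
where $W$ and $C$ denote the provisioned wireless bandwidth and computing capacity, $\alpha$ and $\beta$ their unit costs, $d_{\mathrm{tx}}$ and $d_{\mathrm{comp}}$ the transmission and processing delays, and $D_{\max}$ the latency target; an accuracy constraint can be incorporated analogously. When the delay terms are convex in $W$ and $C$, the problem is a convex program.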
Abstract: Federated Edge Learning (FEEL) is a distributed machine learning technique in which each device contributes to training a global inference model by independently performing local computations on its own data. More recently, FEEL has been combined with over-the-air computation (OAC), where the global model is computed over the air by leveraging the superposition of analog signals. However, when implementing FEEL with OAC, a key challenge is how to precode the analog signals to overcome time misalignments at the receiver. In this work, we propose a novel synchronization-free method to recover the parameters of the global model over the air without requiring any prior information about the time misalignments. To this end, we formulate a norm-minimization problem and recover the global model directly by solving a convex semidefinite program. The performance of the proposed method is evaluated in terms of accuracy and convergence via numerical experiments. We show that the proposed algorithm comes within $10\%$ of the ideal, perfectly synchronized scenario and performs $4\times$ better than the baseline case where no recovery method is used.
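As a rough illustration of this type of recovery (generic notation introduced here for exposition, not the paper's exact formulation), the received superposition can be modelled as $y = \sum_{k} \mathcal{D}_{\tau_k}(x_k) + n$, where $x_k$ is the local update of device $k$, $\mathcal{D}_{\tau_k}$ an unknown delay operator, and $n$ noise. One generic way to cast the recovery as a convex program is
\[
\min_{Z}\ \|Z\|_{*} \quad \text{subject to} \quad \left\| \mathcal{A}(Z) - y \right\|_{2} \le \epsilon,
\]
where $\mathcal{A}$ is a linear measurement map; with the nuclear norm $\|\cdot\|_{*}$, this problem admits an equivalent semidefinite-program representation.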
Abstract: Edge intelligence is a scalable solution for analyzing distributed data, but it cannot provide reliable services in large-scale cellular networks unless the inherent effects of fading and interference are also taken into account. In this paper, we present the first mathematical framework for modelling edge video analytics in multi-cell cellular systems. We derive expressions for the coverage probability, the ergodic capacity, the probability of successfully completing the video analytics within a target delay, and the effective frame rate. We also analyze the effect of the system parameters on the accuracy of the detection algorithm, the frame rate supported at the edge server, and the system fairness.
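For reference, the first two metrics follow their standard definitions (stated here in generic form; the closed-form expressions derived in the paper are specific to the considered multi-cell model):
\[
P_{c}(\theta) = \mathbb{P}\!\left[\mathrm{SINR} > \theta\right], \qquad C = \mathbb{E}\!\left[\log_{2}\!\left(1 + \mathrm{SINR}\right)\right],
\]
where the probability and expectation are taken over the fading and the spatial distribution of the interfering base stations.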