Abstract: Semantic Edge Computing (SEC) and Semantic Communication (SemCom) have been proposed as viable approaches to achieve real-time edge-enabled intelligence in sixth-generation (6G) wireless networks. On one hand, SemCom leverages the strength of Deep Neural Networks (DNNs) to encode and communicate only the semantic information, while remaining robust to channel distortions by compensating for wireless effects, ultimately improving communication efficiency. On the other hand, SEC leverages distributed DNNs to divide the computation of a DNN across different devices based on their computational and networking constraints. Although significant progress has been made in both fields, the literature lacks a systematic view connecting them. In this work, we fill this gap by unifying the SEC and SemCom fields. We summarize the research problems in these two fields and provide a comprehensive review of the state of the art, focusing on their technical strengths and challenges.
Abstract: Deep neural networks (DNNs) are widely used and play a major role in computer vision and autonomous navigation. However, these DNNs are computationally complex, and deploying them on resource-constrained platforms is difficult without additional optimization and customization. In this manuscript, we provide an overview of DNN architectures and propose methods to reduce their computational complexity in order to accelerate training and inference, so that they fit on edge computing platforms with low computational resources.
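The abstract above does not specify which optimizations its authors propose; as one illustration of the kind of complexity reduction it refers to, the sketch below applies magnitude-based weight pruning to a layer's weight matrix using plain NumPy. The function name `magnitude_prune` and the layer dimensions are assumptions made for this example, not details from the manuscript.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    fraction of the weights are removed, a common complexity-reduction step
    before deploying a DNN on a low-resource edge platform."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# Example: prune 90% of a hypothetical 256x128 fully connected layer.
w = np.random.randn(256, 128)
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(w_sparse) / w_sparse.size:.3f}")
```

Pruned weights can then be stored in a sparse format, reducing both memory footprint and multiply-accumulate operations at inference time.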
Abstract: Genetic algorithms are modeled after biological evolutionary processes, in which natural selection favors the fittest individuals. They are heuristic-based and inexpensive to compute. Genetic algorithms use selection, crossover, and mutation to obtain feasible solutions to computational problems. In this paper, we apply our genetic optimization algorithms to a mission-critical and constraint-aware computation problem.
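Since the abstract names the three standard operators (selection, crossover, mutation) without showing them, a minimal, self-contained genetic algorithm over bit strings is sketched below. This is a generic illustration, not the authors' mission-specific algorithm; the toy "one-max" fitness function and all parameter values are assumptions.

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=50, generations=100,
                     crossover_rate=0.8, mutation_rate=0.05):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # Selection: keep the fitter of two random individuals.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:  # Crossover: swap tails at a cut.
                cut = random.randrange(1, n_genes)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # Mutation: flip each bit with small probability.
            next_pop += [[g ^ (random.random() < mutation_rate) for g in c]
                         for c in (p1, p2)]
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# Toy example: maximize the number of 1-bits ("one-max").
best = genetic_optimize(fitness=sum, n_genes=32)
print(best, sum(best))
```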
Abstract: Classical optimization algorithms in machine learning often take a long time to compute when applied to multi-dimensional problems and require a huge amount of CPU and GPU resources. Quantum parallelism has the potential to speed up machine learning algorithms. We describe a generic mathematical model that leverages quantum parallelism to speed up machine learning algorithms. We also apply quantum machine learning and quantum parallelism to a $3$-dimensional image that varies with time.
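The abstract does not state its generic model explicitly; the standard quantum-parallelism identity that such models typically build on can be written as follows (this is textbook background, not the authors' specific formulation). For a Boolean function $f:\{0,1\}^n \to \{0,1\}$ with oracle unitary $U_f\lvert x\rangle\lvert y\rangle = \lvert x\rangle\lvert y \oplus f(x)\rangle$, a single application of $U_f$ to a uniform superposition evaluates $f$ on all $2^n$ inputs at once:

\[
U_f\!\left(\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}\lvert x\rangle\lvert 0\rangle\right)
= \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}\lvert x\rangle\lvert f(x)\rangle .
\]

Extracting useful answers from this superposition still requires interference and measurement, which is where any claimed speed-up for a machine learning subroutine must be argued.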
Abstract: In a resource-constrained, contested environment, computing resources need to be aware of possible size, weight, and power (SWaP) restrictions. SWaP-aware computational efficiency depends upon the optimization of computational resources and intelligent time-versus-efficiency tradeoffs in decision making. In this paper, we address the complexity of various optimization strategies related to low-SWaP computing. Due to these restrictions, only a small subset of simpler, quickly computable algorithms can be used for tactical, adaptive computing.
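As a concrete illustration of the time-versus-efficiency tradeoff the abstract mentions (a generic sketch, not a method from the paper itself), the code below bounds a search by a wall-clock deadline and returns the best solution found so far; the function name `anytime_minimize`, the budget, and the toy cost function are assumptions.

```python
import random
import time

def anytime_minimize(candidates, cost, deadline_s):
    """Deadline-bounded search: evaluate candidates until the time budget
    expires, then return the best solution found so far."""
    start, best, best_cost = time.monotonic(), None, float("inf")
    for c in candidates:
        if time.monotonic() - start > deadline_s:
            break  # SWaP/mission deadline reached; settle for best-so-far.
        c_cost = cost(c)
        if c_cost < best_cost:
            best, best_cost = c, c_cost
    return best, best_cost

# Example: pick the candidate closest to 0.5 within a 10 ms budget.
cands = (random.random() for _ in range(10**6))
print(anytime_minimize(cands, cost=lambda x: abs(x - 0.5), deadline_s=0.01))
```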
Abstract: There is a subset of computational problems, computable in polynomial time, for which an existing algorithm may not complete due to a lack of high-performance computing resources in the mission field. We define a subclass of the deterministic polynomial-time complexity class, called the mission class, since many polynomial problems are not computable in mission time. Focusing on this subclass of languages in the context of successful military applications, we also discuss their computational and communication constraints. We investigate feasible (non)linear models that minimize energy and maximize memory, efficiency, and computational power, and we also provide an approximate solution, obtained within a predetermined length of computation time using limited resources, so that an optimal solution to a language can be determined.
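One possible formalization of the mission class sketched above (the deadline parameter $t_m$ and the exact quantification are assumptions for illustration, not necessarily the authors' definition) is:

\[
\textsc{Mission}(t_m) \;=\; \bigl\{\, L \in \mathsf{P} \;\bigm|\; \exists \text{ a deterministic TM } M \text{ deciding } L \text{ with } \mathrm{time}_M(x) \le t_m \text{ for all mission inputs } x \,\bigr\},
\]

so that $\textsc{Mission}(t_m) \subseteq \mathsf{P}$ captures exactly those polynomial-time languages whose instances arising in the field can be decided before the fixed mission deadline $t_m$.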