Abstract:We present a multisensor fusion framework for the onboard real-time navigation of a quadrotor in an indoor environment. The framework integrates sensor readings from an Inertial Measurement Unit (IMU), a camera-based object detection algorithm, and an Ultra-WideBand (UWB) localisation system. The sensor readings are not always readily available, leading to inaccurate pose estimation and hence poor navigation performance. To effectively handle and fuse the sensor readings, and to accurately estimate the pose of the quadrotor for tracking a predefined trajectory, we design a Maximum Correntropy Criterion Kalman Filter (MCC-KF) that can manage intermittent observations. The MCC-KF is designed to improve on estimation performed with a standard Kalman Filter (KF), since KFs are likely to degrade dramatically in practical scenarios in which the noise is non-Gaussian (especially when the noise is heavy-tailed). To evaluate the performance of the MCC-KF, we compare it with a Kalman filter previously designed by the authors, aiming to demonstrate the effectiveness of the MCC-KF in handling indoor navigation missions. The simulation results show that our presented framework offers low positioning errors, while effectively handling intermittent sensor measurements.
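To make the filtering idea concrete, the following is a minimal Python sketch of one common MCC-KF variant, in which a Gaussian kernel of the innovation inflates the effective measurement covariance so that heavy-tailed outliers are down-weighted. The function name mcc_kf_update, the kernel bandwidth sigma, and the handling of missing readings are illustrative assumptions, not the authors' implementation.
\begin{verbatim}
import numpy as np

def mcc_kf_update(x_pred, P_pred, y, C, R, sigma=2.0):
    """One MCC-KF measurement update (kernel-weighted sketch).

    A Gaussian kernel of the innovation down-weights outlying
    measurements; if y is None (intermittent sensor), the update
    is skipped and the prediction is returned unchanged.
    """
    if y is None:                        # measurement not available
        return x_pred, P_pred
    innov = y - C @ x_pred               # innovation
    L = np.exp(-(innov @ innov) / (2 * sigma**2))  # correntropy weight
    R_eff = R / max(L, 1e-8)             # outliers inflate effective R
    S = C @ P_pred @ C.T + R_eff         # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # kernel-weighted gain
    x = x_pred + K @ innov
    P = (np.eye(len(x_pred)) - K @ C) @ P_pred
    return x, P
\end{verbatim}
As sigma grows, the kernel weight approaches 1 and the update reduces to the standard KF, which is consistent with the KF being recovered as a limiting case.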
Abstract:We propose a multisensor fusion framework for onboard real-time navigation of a quadrotor in an indoor environment, by integrating sensor readings from an Inertial Measurement Unit (IMU), a camera-based object detection algorithm, and an Ultra-WideBand (UWB) localization system. The sensor readings from the camera-based object detection algorithm and the UWB localization system arrive intermittently, since these measurements are not always readily available. We design a Kalman filter that manages intermittent observations in order to handle and fuse the readings and estimate the pose of the quadrotor for tracking a predefined trajectory. The system is implemented via a Hardware-in-the-loop (HIL) simulation technique, in which the dynamic model of the quadrotor is simulated in an open-source 3D robotics simulator tool, and the whole navigation system is implemented on an Artificial Intelligence (AI)-enabled edge GPU. The simulation results show that our proposed framework offers low positioning and trajectory errors, while handling intermittent sensor measurements.
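The intermittent-observation logic admits a compact sketch: below is a hedged Python stand-in (not the paper's code) in which the IMU drives the time update at every step, while the camera/UWB measurement update is applied only when a reading actually arrives (y is None otherwise).
\begin{verbatim}
import numpy as np

def kf_step(x, P, u, y, A, B, C, Q, R):
    """One Kalman-filter iteration tolerating intermittent observations."""
    # time update (IMU-driven prediction)
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    if y is None:                        # no camera/UWB packet this step
        return x_pred, P_pred
    # measurement update, applied only when a reading arrives
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
\end{verbatim}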
Abstract:Latency is inherent in almost all real-world networked applications. In this paper, we propose a distributed allocation strategy over multi-agent networks with delayed communications. The state of each agent (or node) represents its share of assigned resources out of a fixed amount (equal to the overall demand). Every node locally updates its state toward optimizing a global allocation cost function via information received from its neighbouring nodes, even when the data exchange over the network is heterogeneously delayed at different links. The update is based on the alternating direction method of multipliers (ADMM) formulation, subject to both a sum-preserving coupling constraint and local box constraints. The solution is derivative-free and holds for general (not necessarily differentiable) convex cost models. We use the notion of augmented consensus over undirected networks to model delayed information exchange and for convergence analysis. We simulate our \textit{delay-tolerant} algorithm for
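As a point of reference for this problem class, here is a minimal Python sketch of the classical exchange-ADMM iteration for minimizing a sum of convex costs subject to a fixed total and local boxes; it is centralized and delay-free, whereas the paper's algorithm replaces the global mean with delayed, augmented-consensus information. The name exchange_admm, the solver choice, and the toy costs are assumptions for illustration; the 1-D proximal steps are solved by bounded search, so the costs need not be differentiable.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def exchange_admm(f_list, d, box, rho=1.0, iters=200):
    """ADMM for min sum_i f_i(x_i) s.t. sum_i x_i = d, x_i in box[i]."""
    n = len(f_list)
    x = np.full(n, d / n)                # feasible start: equal split
    u = 0.0                              # scaled dual variable
    for _ in range(iters):
        xbar = x.mean()
        for i, (f, (lo, hi)) in enumerate(zip(f_list, box)):
            # derivative-free proximal step over the local box
            c = x[i] - xbar + d / n - u
            res = minimize_scalar(
                lambda z: f(z) + 0.5 * rho * (z - c)**2,
                bounds=(lo, hi), method="bounded")
            x[i] = res.x
        u += x.mean() - d / n            # dual ascent on sum constraint
    return x

# usage: three nodes with nonsmooth convex costs, total demand 10
fs = [lambda z: abs(z - 2), lambda z: (z - 5)**2,
      lambda z: abs(z) + 0.1 * z**2]
print(exchange_admm(fs, d=10.0, box=[(0, 6)] * 3))
\end{verbatim}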
Abstract:We propose a linear time-difference-of-arrival (TDOA) measurement model to improve \textit{distributed} estimation performance for localized target tracking. We design distributed filters over sparse (possibly large-scale) communication networks using consensus-based data-fusion techniques. The proposed distributed and localized tracking protocols considerably reduce the sensor network's required connectivity and communication rate. We further consider $\kappa$-redundant observability and fault-tolerant design in case communication links or sensor nodes are lost. We present the minimal conditions on the remaining sensor network (after link/node removal) such that distributed observability is still preserved and, thus, the sensor network can track the (single) maneuvering target. The motivation is to trade communication load for processing load, as computational units are, in general, less costly than communication devices. We evaluate the tracking performance via simulations in MATLAB.
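The phrase "linear TDOA measurement model" can be illustrated by the classical squaring-and-differencing construction below (a hedged Python sketch, not necessarily the paper's exact model): subtracting the reference sensor's squared-range equation cancels the quadratic term, leaving equations linear in the stacked unknown [p, r0].
\begin{verbatim}
import numpy as np

def tdoa_linear_fix(sensors, range_diffs):
    """Linear TDOA fix from ||p - s_i|| - ||p - s_0|| = d_i, i >= 1.

    Squaring and subtracting the reference equation removes ||p||^2,
    leaving a system linear in [p, r0] with r0 = ||p - s_0||.
    """
    s0, si = sensors[0], sensors[1:]
    A = np.hstack([-2 * (si - s0),                 # coefficients of p
                   -2 * range_diffs[:, None]])     # coefficient of r0
    b = range_diffs**2 - np.sum(si**2, axis=1) + np.sum(s0**2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:-1]                                # position estimate

# usage: four sensors, target at (3, 4), noiseless range differences
S = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
p = np.array([3., 4.])
d = np.linalg.norm(p - S[1:], axis=1) - np.linalg.norm(p - S[0])
print(tdoa_linear_fix(S, d))                       # ~ [3. 4.]
\end{verbatim}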
Abstract:In this paper, we study stateless and stateful physics-based anomaly detection scenarios via distributed estimation over sensor networks. In the stateful case, the detector keeps track of the sensor residuals (i.e., the difference between the estimated and true outputs) and reports an alarm if certain statistics of the recorded residuals deviate beyond a predefined threshold, e.g., the $\chi^2$ (chi-square) detector. In the stateless case, instead, only an instantaneous deviation of the residual raises the alarm, without considering the history of the sensor outputs and estimation data. Given an (approximate) false-alarm rate for both cases, we propose a probabilistic threshold design based on the noise statistics. We show by simulation that increasing the window length in the stateful case may not necessarily reduce the false-alarm rate; on the other hand, it adds unwanted delay to raising the alarm. The distributed aspect of the proposed detection algorithm enables local isolation of the faulty sensors, with possible recovery solutions by adding redundant observationally-equivalent sensors. We then offer a mechanism to design $Q$-redundant distributed observers, robust to failure (or removal) of up to $Q$ sensors over the network.
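The stateful/stateless distinction and the probabilistic threshold design translate directly into code; the sketch below (illustrative Python, with class and parameter names assumed) sets the stateful threshold from the $\chi^2$ quantile at the window length and the stateless one from the Gaussian quantile, each for a prescribed false-alarm rate alpha.
\begin{verbatim}
import numpy as np
from collections import deque
from scipy.stats import chi2, norm

class Chi2Detector:
    """Stateful chi-square detector over a sliding residual window.

    Residuals are assumed zero-mean Gaussian with variance sigma2
    under no fault; the threshold targets false-alarm rate alpha.
    """
    def __init__(self, window, sigma2, alpha=0.01):
        self.buf = deque(maxlen=window)
        self.sigma2 = sigma2
        self.thresh = chi2.ppf(1 - alpha, df=window)

    def step(self, residual):
        self.buf.append(residual**2 / self.sigma2)
        # sum of `window` squared normalized residuals ~ chi2(window)
        return (len(self.buf) == self.buf.maxlen
                and sum(self.buf) > self.thresh)

def stateless_alarm(residual, sigma2, alpha=0.01):
    """Stateless variant: alarm on a single deviant residual."""
    return abs(residual) > np.sqrt(sigma2) * norm.ppf(1 - alpha / 2)
\end{verbatim}
The window-length trade-off from the abstract is visible here: a larger df raises the threshold and delays the alarm by up to window samples, without automatically lowering the realized false-alarm rate.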
Abstract:Motivated by recent developments in networking and parallel data-processing, we consider a distributed and localized finite-sum (or fixed-sum) allocation technique to solve resource-constrained convex optimization problems over multi-agent networks (MANs). Such networks consist of (smart) agents, each representing an intelligent entity capable of communication, processing, and decision-making. In particular, we consider problems subject to practical nonlinear constraints on the dynamics of the agents in terms of their communication and actuation capabilities (referred to as the node dynamics), e.g., networks of mobile robots subject to actuator saturation and quantized communication. The considered distributed sum-preserving optimization solution further enables adding purposeful nonlinear constraints, for example, sign-based nonlinearities, to reach convergence in predefined time or to gain robustness to impulsive noise and disturbances in faulty environments. Moreover, convergence can be achieved under minimal network connectivity requirements among the agents; thus, the solution is applicable over dynamic networks where channels come and go due to the agents' mobility and limited range. This paper discusses how various nonlinearity constraints on the optimization problem (e.g., collaborative allocation of resources) can be addressed for different applications via a distributed setup (over a network).
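As one concrete instance of the sign-based nonlinearities mentioned above, the following Python sketch (an assumption-labeled stand-in, not the paper's algorithm) runs a sum-preserving allocation flow in which each agent moves resources along edges according to the sign of local marginal-cost differences; the antisymmetry of the sign coupling keeps the total resource fixed, and the bounded update is compatible with actuator saturation.
\begin{verbatim}
import numpy as np

def sign_allocation(grads, x0, adj, step=0.01, iters=4000):
    """Euler discretization of
       xdot_i = -sum_j a_ij * sign(g_i(x_i) - g_j(x_j)),
    which preserves sum(x) at every step (odd coupling)."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(iters):
        g = np.array([grads[i](x[i]) for i in range(n)])
        x -= step * np.array([sum(adj[i][j] * np.sign(g[i] - g[j])
                                  for j in range(n)) for i in range(n)])
    return x

# usage: quadratic costs on a 3-node line graph; the fixed total
# resource is set by the initial condition sum(x0) = 12
grads = [lambda z: 2 * (z - 1), lambda z: (z - 4), lambda z: 4 * (z - 6)]
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(sign_allocation(grads, x0=[4, 4, 4], adj=adj))
\end{verbatim}
In discrete time the sign term chatters in a band of width proportional to the step size around the optimum (where all marginal costs are equal); the predefined-time guarantees referred to in the abstract concern the continuous-time flow.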
Abstract:This paper proposes a distributed attack detection and mitigation technique based on distributed estimation over a multi-agent network, where the agents take partial system measurements susceptible to (possible) biasing attacks. In particular, we assume that the system is not locally observable via the measurements in the direct neighborhood of any agent. First, for performance analysis in the attack-free case, we show that the proposed distributed estimation is unbiased with bounded mean-square deviation in steady-state. Then, we propose a residual-based strategy to locally detect possible attacks at the agents. In contrast to the deterministic thresholds in the literature, which assume an upper bound on the noise support, we define the thresholds on the residuals in a probabilistic sense. After detecting and isolating the attacked agent, a system-digraph-based mitigation strategy is proposed to replace the attacked measurement with a new observationally-equivalent one and recover the potential observability loss. We adopt a graph-theoretic method to classify the agents based on their measurements, distinguishing between the agents recovering the system's rank-deficiency and those recovering the output-connectivity of the system digraph. The attack detection/mitigation strategy is specifically described for each type and is of polynomial-order complexity, which suits large-scale applications. Illustrative simulations support our theoretical results.
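For the mitigation step, a small numeric stand-in (illustrative Python; the paper's method is graph-theoretic and of polynomial order, whereas the rank test below is a brute-force check) conveys the intent: after isolating an attacked measurement, search for a candidate observationally-equivalent row whose substitution restores full observability.
\begin{verbatim}
import numpy as np

def is_observable(A, C_rows):
    """Kalman rank condition on the stacked observability matrix."""
    n = A.shape[0]
    C = np.vstack(C_rows)
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

def equivalent_substitutes(A, C_rows, attacked, candidates):
    """List candidate measurement rows that recover observability
    once the attacked row is discarded."""
    kept = [c for k, c in enumerate(C_rows) if k != attacked]
    return [j for j, c in enumerate(candidates)
            if is_observable(A, kept + [c])]
\end{verbatim}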
Abstract:This paper uses iterative learning control (ILC) to realize repetitive tracking for time-varying systems (TVSs) with non-repetitive time-varying parameters, also called non-repetitive TVSs. To enhance the ILC performance for non-repetitive TVSs, we propose a machine learning (ML) based nominal model update mechanism, which uses linear regression to update the nominal model at each ILC trial using only the current trial's information. Given that the ML mechanism forces the model uncertainties to remain within the ILC robust tolerance, an ILC update law is proposed to deal with non-repetitive TVSs. We also show how to tune the parameters inside the ML and ILC algorithms to achieve the desired aggregate performance. The robustness and reliability of the proposed method are verified by simulations, and comparison with the current state of the art demonstrates superior control performance in terms of precision. The main contributions are broadening ILC applications from time-invariant systems to non-repetitive TVSs, adopting an ML regression technique to estimate non-repetitive time-varying parameters between two ILC trials, and proposing a detailed parameter tuning mechanism to achieve the desired performance.
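The trial-wise interaction between the ML regression and the ILC update can be sketched as follows (hedged Python for a scalar-gain toy plant; the paper's model class, update law, and tuning rules are richer): the nominal gain is refit by least squares from the current trial only, and the refit model immediately shapes the next ILC input.
\begin{verbatim}
import numpy as np

def update_nominal_model(u, y, b_prior=1.0):
    """Linear-regression refit of a scalar nominal gain b_hat
    (argmin_b ||y - b*u||^2), using only the current trial's data."""
    den = float(u @ u)
    return b_prior if den < 1e-9 else float(u @ y) / den

def ilc_trial(u, y_ref, plant, b_prior=1.0, learn=0.8):
    """One ILC trial: apply u, measure y, refit the model, update u."""
    y = plant(u)                         # run the time-varying system
    e = y_ref - y                        # trial tracking error
    b_hat = update_nominal_model(u, y, b_prior)
    u_next = u + (learn / b_hat) * e     # model-based ILC update law
    return u_next, e, b_hat

# usage: a plant whose gain drifts non-repetitively between trials
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y_ref = np.sin(2 * np.pi * t)
u, b_hat = np.zeros_like(t), 1.0
for _ in range(20):
    b_j = 2.0 + 0.3 * rng.standard_normal()   # non-repetitive gain
    u, e, b_hat = ilc_trial(u, y_ref, lambda v: b_j * v, b_hat)
print(np.max(np.abs(e)))   # error settles into a band set by the drift
\end{verbatim}
The learning gain learn plays the role of the tunable ILC parameter: it must be small enough that the residual model mismatch stays within the robust tolerance, which is the trade-off the abstract's tuning mechanism formalizes.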
Abstract:Mobile networks are experiencing a tremendous increase in data volume and user density. An efficient technique to alleviate this issue is to bring the data closer to the users by exploiting the caches of edge network nodes, such as fixed or mobile access points and even user devices. Meanwhile, the fusion of machine learning and wireless networks offers a viable way for network optimization, as opposed to traditional optimization approaches, which either incur high complexity or fail to provide optimal solutions. Among the various machine learning categories, reinforcement learning operates in an online and autonomous manner without relying on large sets of historical data for training. In this survey, reinforcement learning-aided mobile edge caching is presented, aiming at highlighting the achieved network gains over conventional caching approaches. Taking into account the heterogeneity of sixth generation (6G) networks in various wireless settings, such as fixed, vehicular and flying networks, learning-aided edge caching is presented, departing from traditional architectures. Furthermore, a categorization according to the desirable performance metric, such as spectral, energy and caching efficiency, average delay, and backhaul and fronthaul offloading, is provided. Finally, several open issues are discussed, aiming to stimulate further interest in this important research field.
Abstract:In this paper, we consider the binary classification problem via distributed Support-Vector-Machines (SVM), where the idea is to train a network of agents, each with a limited share of the data, to cooperatively learn the SVM classifier for the global database. Agents only share processed information regarding the classifier parameters and the gradients of the local loss functions instead of their raw data. In contrast to existing work, we propose a continuous-time (CT) algorithm that incorporates network topology changes as discrete jumps. This hybrid nature allows us to remove the chattering that arises from discretizing the underlying CT process. We show that the proposed algorithm converges to the SVM classifier over time-varying weight-balanced directed graphs by using arguments from matrix perturbation theory.
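A discretized sketch of the consensus-plus-subgradient structure is given below (Python; this is a plain discrete-time stand-in, i.e., exactly the kind of discretization whose chattering the paper's hybrid continuous-time design avoids, and all names are illustrative): each agent shares only its classifier iterate, never raw data, and descends the subgradient of its local regularized hinge loss.
\begin{verbatim}
import numpy as np

def distributed_svm(X_parts, y_parts, adj, lam=0.1, step=0.05,
                    iters=500):
    """Consensus + local subgradient sketch for distributed linear
    SVM; agents exchange only classifier parameters W[i]."""
    n, d = len(X_parts), X_parts[0].shape[1]
    W = np.zeros((n, d))
    for _ in range(iters):
        W_new = W.copy()
        for i in range(n):
            # consensus with neighbours on classifier parameters
            cons = sum(adj[i][j] * (W[j] - W[i]) for j in range(n))
            # subgradient of local regularized hinge loss
            marg = y_parts[i] * (X_parts[i] @ W[i])
            g = lam * W[i] - (X_parts[i].T
                              @ (y_parts[i] * (marg < 1))) / len(marg)
            W_new[i] = W[i] + step * (cons - g)
        W = W_new
    return W.mean(axis=0)

# usage: two agents, each holding half of a separable toy dataset
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2))
y = np.sign(X @ np.array([1.0, -1.0]) + 0.1)
w = distributed_svm([X[:20], X[20:]], [y[:20], y[20:]],
                    adj=[[0, 1], [1, 0]])
\end{verbatim}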