Abstract: The increased integration of clean yet stochastic energy resources and the growing number of extreme weather events are narrowing the decision-making window of power grid operators. This time constraint is fueling a plethora of research on Machine Learning (ML)-based optimization proxies. While finding a fast solution is appealing, the inherent vulnerabilities of learning-based methods are hindering their adoption. One of these vulnerabilities is the data poisoning attack, which adds perturbations to ML training data, leading to incorrect decisions. The impact of poisoning attacks on learning-based power system optimizers has not been thoroughly studied, which creates a critical vulnerability. In this paper, we examine the impact of data poisoning attacks on ML-based optimization proxies used to solve the DC Optimal Power Flow problem. Specifically, we compare the resilience of three different methods (a penalty-based method, a post-repair approach, and a direct mapping approach) against the adverse effects of poisoning attacks, using the optimality and feasibility of these proxies as performance metrics. The insights of this work establish a foundation for enhancing the resilience of neural power system optimizers.
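To illustrate the penalty-based approach mentioned above, the following is a minimal sketch of how a DC-OPF proxy could be trained with feasibility penalties added to its regression loss. The network architecture, penalty weight, and constraint terms (generator bounds and system-wide power balance) are illustrative assumptions, not the exact formulation studied in the paper.

```python
# Hedged sketch: penalty-based training loss for a hypothetical DC-OPF proxy
# that maps load vectors to generator dispatch. All sizes and weights are
# placeholders chosen for illustration only.
import torch
import torch.nn as nn

class DCOPFProxy(nn.Module):
    def __init__(self, n_loads, n_gens):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_loads, 64), nn.ReLU(),
            nn.Linear(64, n_gens),
        )

    def forward(self, loads):
        return self.net(loads)

def penalty_loss(pred_pg, target_pg, loads, pg_min, pg_max, weight=10.0):
    # Optimality term: match the dispatch produced by an exact DC-OPF solver.
    mse = nn.functional.mse_loss(pred_pg, target_pg)
    # Feasibility terms: penalize generator-limit and power-balance violations.
    bound_viol = torch.relu(pg_min - pred_pg) + torch.relu(pred_pg - pg_max)
    balance_viol = (pred_pg.sum(dim=1) - loads.sum(dim=1)).abs()
    return mse + weight * (bound_viol.mean() + balance_viol.mean())
```

A post-repair approach would instead project the raw prediction back into the feasible set after inference, while a direct mapping approach trains only on the regression term; the penalty weight above is the knob that trades optimality against feasibility during training.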
Abstract: As the complexities of Dynamic Data Driven Applications Systems increase, preserving their resilience becomes more challenging. For instance, maintaining power grid resilience is increasingly complicated by the growing number of stochastic variables (such as renewable outputs) and extreme weather events that add uncertainty to the grid. Current optimization methods have struggled to accommodate this rise in complexity, which has fueled growing interest in data-driven methods for operating the grid and, in turn, increased its vulnerability to cyberattacks. One commonly discussed disruption is the adversarial disruption, in which an intruder adds a small perturbation to input data in order to "manipulate" system operation. Over the last few years, work on adversarial training and disruptions in power systems has gained popularity. In this paper, we first review these applications, focusing on the two most common types of adversarial disruptions: evasion and poisoning. Through this review, we highlight the gap between poisoning and evasion research when applied to the power grid, which stems from the underlying assumption that model training is secure, leaving evasion as the primary type of studied disruption. Finally, we examine the impacts of data poisoning disruptions and showcase how they can endanger power grid resilience.
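The distinction between the two disruption types reviewed above can be made concrete with a small sketch: a poisoning disruption perturbs training data before the model is fit, whereas an evasion disruption perturbs an input presented to an already-trained model. The load data, perturbation budget, and fraction of corrupted samples below are assumed placeholders.

```python
# Hedged sketch: poisoning vs. evasion perturbations on synthetic load data.
import numpy as np

rng = np.random.default_rng(0)
train_loads = rng.uniform(50, 150, size=(1000, 30))   # historical load snapshots
test_load = rng.uniform(50, 150, size=(1, 30))        # operating-point input
epsilon = 0.5                                         # small perturbation budget

# Poisoning: corrupt a fraction of the training set so the learned mapping is biased.
poison_idx = rng.choice(len(train_loads), size=50, replace=False)
train_loads[poison_idx] += rng.uniform(-epsilon, epsilon, size=(50, 30))

# Evasion: perturb only the input presented to the trained model at decision time.
evasion_input = test_load + rng.uniform(-epsilon, epsilon, size=test_load.shape)
```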
Abstract: Machine learning (ML) solutions to indoor localization problems have become popular in recent years due to their high positioning accuracy and low implementation cost. This paper proposes a novel local nonparametric approach for estimating locations from high-dimensional Received Signal Strength Indicator (RSSI) values. Our approach consists of a sequence of classification algorithms that sequentially narrows the space of possible location solutions into smaller neighborhoods. The idea of this sequential classification method is similar to the decision tree algorithm, but a key difference is that the splitting of the dataset at each node is based not on the input features (i.e., the RSSI values) but on discrete-valued variables generated from the output variable (i.e., the 3D real-world coordinates). The strength of our localization solution can be tuned to problem specifics through the choice of how the location space is sequentially partitioned into smaller neighborhoods. Using the publicly available indoor localization dataset UJIIndoorLoc, we evaluate our proposed method against global ML algorithms on the same dataset. The primary contribution of this paper is to introduce a novel local ML solution for indoor localization problems.
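A minimal sketch of the sequential-classification idea described above is given below: the output coordinates are discretized into a coarse grid, a classifier predicts the coarse cell from RSSI, and a second-stage classifier refines the estimate within each cell. The two-stage depth, grid sizes, random-forest classifier, and restriction to the two horizontal coordinates are simplifying assumptions, not the paper's exact configuration.

```python
# Hedged sketch: two-stage sequential classification on output-derived labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def discretize(coords, cell_size):
    # Map continuous 2D coordinates (e.g., longitude/latitude) to grid-cell labels.
    cells = np.floor(coords / cell_size).astype(int)
    return cells[:, 0] * 10_000 + cells[:, 1]   # encode (row, col) as one label

def fit_sequential(rssi, coords, coarse=20.0, fine=5.0):
    # Stage 1: classify the coarse neighborhood the sample falls in.
    coarse_labels = discretize(coords, coarse)
    coarse_clf = RandomForestClassifier(n_estimators=100).fit(rssi, coarse_labels)
    # Stage 2: within each coarse cell, classify a finer neighborhood.
    fine_clfs = {}
    for lab in np.unique(coarse_labels):
        mask = coarse_labels == lab
        fine_clfs[lab] = RandomForestClassifier(n_estimators=100).fit(
            rssi[mask], discretize(coords[mask], fine))
    return coarse_clf, fine_clfs
```

At prediction time, the stage-1 classifier selects the coarse cell and the matching stage-2 classifier narrows the estimate further; deeper sequences of partitions follow the same pattern.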