Abstract: This study develops a deep learning-based approach to automate inbound load plan adjustments for a large transportation and logistics company. It addresses a critical challenge for the efficient and resilient planning of e-commerce operations in the presence of increasing uncertainty. The paper introduces an innovative data-driven approach to inbound load planning. Leveraging extensive historical data, the paper presents a two-stage decision-making process using deep learning and conformal prediction to provide scalable, accurate, and confidence-aware solutions. The first stage is dedicated to tactical load planning, while the second stage addresses operational planning, incorporating the latest available data to refine decisions at the finest granularity. Extensive experiments compare traditional machine learning models and deep learning methods; they highlight the importance and effectiveness of embedding layers for enhancing the performance of deep learning models. Furthermore, the results emphasize the efficacy of conformal prediction in providing confidence-aware prediction sets. The findings suggest that data-driven methods can substantially improve decision making in inbound load planning, offering planners a comprehensive, trustworthy, and real-time framework for their decisions. An initial deployment in an industrial setting indicates a high accuracy of the proposed framework.
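The abstract relies on conformal prediction for confidence-aware prediction sets. The following is a minimal sketch of split conformal prediction producing such sets; the classifier, synthetic data, and coverage level are illustrative placeholders, not the paper's load-planning models.

```python
# Minimal sketch of split conformal prediction for confidence-aware
# prediction sets. Classifier, data, and coverage level are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nonconformity score: 1 - probability assigned to the true class.
cal_probs = model.predict_proba(X_cal)
scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

alpha = 0.1  # target miscoverage (90% coverage)
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set: all classes whose score does not exceed the threshold.
test_probs = model.predict_proba(X_test)
prediction_sets = test_probs >= 1.0 - qhat  # boolean matrix: sample x class

coverage = prediction_sets[np.arange(len(y_test)), y_test].mean()
print(f"empirical coverage: {coverage:.3f}, avg set size: {prediction_sets.sum(1).mean():.2f}")
```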
Abstract: This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice" and provides an introduction to Differential Privacy. It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed, and providing the key desiderata of a robust privacy definition. It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis. Following that, it formalizes the definition of Differential Privacy and its inherent properties, including composition, post-processing immunity, and group privacy. The chapter also reviews the basic techniques and mechanisms commonly used to implement Differential Privacy in its pure and approximate forms.
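Among the basic mechanisms such a chapter typically reviews, the Laplace mechanism is the canonical example for pure Differential Privacy. The sketch below illustrates it on a toy counting query; the dataset and parameters are illustrative.

```python
# Minimal sketch of the Laplace mechanism (pure epsilon-DP).
# The query and dataset are toy examples.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release true_value with epsilon-DP by adding Laplace noise
    of scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
ages = np.array([34, 29, 57, 41, 63, 22])          # toy private dataset
count_over_40 = float((ages > 40).sum())            # counting query, sensitivity 1
noisy_count = laplace_mechanism(count_over_40, sensitivity=1.0, epsilon=0.5, rng=rng)
print(f"true count: {count_over_40}, private release: {noisy_count:.2f}")
```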
Abstract: The integration of renewable energy sources (RES) into power grids presents significant challenges due to their intrinsic stochasticity and uncertainty, necessitating the development of new techniques for reliable and efficient forecasting. This paper proposes a method combining probabilistic forecasting and Gaussian copula for day-ahead prediction and scenario generation of load, wind, and solar power in high-dimensional contexts. By incorporating weather covariates and restoring spatio-temporal correlations, the proposed method enhances the reliability of probabilistic forecasts in RES. Extensive numerical experiments compare the effectiveness of different time series models, with performance evaluated using comprehensive metrics on a real-world, high-dimensional dataset from the Midcontinent Independent System Operator (MISO). The results highlight the importance of weather information and demonstrate the efficacy of the Gaussian copula in generating realistic scenarios, with the proposed weather-informed Temporal Fusion Transformer (WI-TFT) model showing superior performance.
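The copula step can be summarized as: rank-transform historical series to Gaussian space, estimate a correlation matrix, then map correlated Gaussian samples back through the marginal forecast quantiles. The sketch below illustrates this on synthetic data for a single 24-hour series; it is not the paper's WI-TFT pipeline.

```python
# Minimal sketch of Gaussian-copula scenario generation from marginal
# probabilistic forecasts. History and quantiles are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
H = 24                                                         # forecast horizon (hours)
hist = np.cumsum(rng.normal(size=(200, H)), axis=1) + 100.0    # fake historical profiles

# 1. Marginal forecasts: per-hour quantiles (here taken from the history).
q_levels = np.linspace(0.01, 0.99, 99)
marginal_q = np.quantile(hist, q_levels, axis=0)               # shape (99, H)

# 2. Copula: rank-transform history to uniforms, map to Gaussian space,
#    and estimate the temporal correlation matrix.
ranks = stats.rankdata(hist, axis=0) / (hist.shape[0] + 1)
gauss = stats.norm.ppf(ranks)
corr = np.corrcoef(gauss, rowvar=False)                        # shape (H, H)

# 3. Sample correlated Gaussians, map to uniforms, invert the marginals.
n_scenarios = 50
z = rng.multivariate_normal(np.zeros(H), corr, size=n_scenarios)
u = stats.norm.cdf(z)
scenarios = np.empty_like(z)
for h in range(H):
    scenarios[:, h] = np.interp(u[:, h], q_levels, marginal_q[:, h])

print(scenarios.shape)  # (50, 24) correlated day-ahead scenarios
```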
Abstract: The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from exogenous features before solving. This setting is common to many real-world decision processes, and recently it has been shown that decision quality can be substantially improved by solving and differentiating the optimization problem within an end-to-end training loop. However, this approach requires significant computational effort in addition to handcrafted, problem-specific rules for backpropagation through the optimization step, challenging its applicability to a broad class of optimization problems. This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models. The approach is generic and is based on an adaptation of the Learning-to-Optimize paradigm, from which a rich variety of existing techniques can be employed. Experimental evaluations show the ability of several Learning-to-Optimize methods to provide efficient and accurate solutions to an array of challenging Predict-Then-Optimize problems.
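As a minimal illustration of learning solutions directly from features, the sketch below trains a small network to imitate precomputed optimal solutions of a toy allocation problem and reports decision regret. The problem, architecture, and loss are assumptions for illustration, not the paper's benchmarks or methods.

```python
# Minimal sketch: learn a mapping from features directly to (near-)optimal
# feasible solutions. Toy problem: allocate a unit budget across items to
# maximize uncertain item values that depend on observable features.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_items, n_feat = 10, 5
W_true = torch.randn(n_feat, n_items)        # hidden mapping from features to item values

def true_values(feats):
    return feats @ W_true

def solve(c):
    """Argmax of c'x over the unit simplex: put the whole budget on the best item."""
    x = torch.zeros_like(c)
    x[torch.arange(c.shape[0]), c.argmax(dim=1)] = 1.0
    return x

# Training data: features and the optimal solutions of the induced problems.
feats = torch.randn(4096, n_feat)
x_star = solve(true_values(feats))

# Joint predictive model: features -> feasible solution (softmax keeps it on the simplex).
model = nn.Sequential(nn.Linear(n_feat, 64), nn.ReLU(), nn.Linear(64, n_items))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    x_hat = torch.softmax(model(feats), dim=1)
    loss = nn.functional.mse_loss(x_hat, x_star)     # imitate the optimal solutions
    loss.backward()
    opt.step()

# Decision regret of the learned mapping on fresh instances.
test = torch.randn(1024, n_feat)
c = true_values(test)
x_model = torch.softmax(model(test), dim=1)
regret = ((c * solve(c)).sum(1) - (c * x_model).sum(1)).mean()
print(f"average regret: {regret.item():.4f}")
```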
Abstract: Access to granular demand data is essential for the net zero transition; it allows for accurate profiling and active demand management as our reliance on variable renewable generation increases. However, public release of this data is often impossible due to privacy concerns. Good quality synthetic data can circumvent this issue. Despite significant research on generating synthetic smart meter data, there is still insufficient work on creating a consistent evaluation framework. In this paper, we investigate how evaluation frameworks commonly used in other industries that leverage synthetic data, covering fidelity, utility, and privacy, can be applied to synthetic smart meter data. We also recommend specific metrics to ensure that defining aspects of smart meter data are preserved and test the extent to which privacy can be protected using differential privacy. We show that standard privacy attack methods like reconstruction or membership inference attacks are inadequate for assessing the privacy risks of smart meter datasets. We propose an improved method that injects implausible outliers into the training data and then launches privacy attacks directly on these outliers. The choice of $\epsilon$ (a metric of privacy loss) significantly impacts privacy risk, highlighting the necessity of performing these explicit privacy tests when making trade-offs between fidelity and privacy.
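The outlier-injection idea can be illustrated with a loss-based membership-inference test: inject implausible records into the training set, hold out equally implausible records, and check whether the trained model treats the two groups differently. The model, data, and attack below are illustrative stand-ins, not the paper's generators or audit pipeline.

```python
# Minimal sketch of auditing privacy by injecting implausible outliers and
# running a loss-based membership-inference attack on them.
# Data, model, and outlier construction are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 2000, 8
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)     # toy consumption-like target

# Craft implausible outliers; half are injected into training (members),
# half are held out (non-members).
outliers_X = rng.normal(size=(40, d)) * 6.0
outliers_y = rng.normal(size=40) * 20.0
mem_X, non_X = outliers_X[:20], outliers_X[20:]
mem_y, non_y = outliers_y[:20], outliers_y[20:]

model = GradientBoostingRegressor(random_state=0).fit(
    np.vstack([X, mem_X]), np.concatenate([y, mem_y]))

# Attack: records the model fits unusually well are likely training members.
mem_err = (model.predict(mem_X) - mem_y) ** 2
non_err = (model.predict(non_X) - non_y) ** 2
scores = -np.concatenate([mem_err, non_err])              # lower error => member
labels = np.concatenate([np.ones(20), np.zeros(20)])
print(f"membership-inference AUC on injected outliers: {roc_auc_score(labels, scores):.2f}")
```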
Abstract: Recent years have witnessed increasing interest in optimization proxies, i.e., machine learning models that approximate the input-output mapping of parametric optimization problems and return near-optimal feasible solutions. Following recent work by Nellikkath and Chatzivasileiadis (2021), this paper reconsiders the optimality verification problem for optimization proxies, i.e., the determination of the worst-case optimality gap over the instance distribution. The paper proposes a compact formulation for optimality verification and a gradient-based primal heuristic that brings substantial computational benefits compared to the original formulation. The compact formulation is also more general and applies to non-convex optimization problems. The benefits of the compact formulation are demonstrated on large-scale DC Optimal Power Flow and knapsack problems.
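The flavor of a gradient-based primal heuristic for optimality verification can be conveyed on a toy parametric problem whose optimal value has a closed form: projected gradient ascent over the instance domain searches for inputs that maximize the proxy's optimality gap. The proxy, toy problem, and hyperparameters below are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a gradient-based primal heuristic for optimality
# verification of a trained proxy. Toy parametric problem:
#   f*(theta) = min_{x >= 0} ||x - theta||^2, with closed-form optimum.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 10

def opt_value(theta):
    """Closed-form optimal value of min_{x >= 0} ||x - theta||^2."""
    return (torch.clamp(theta, max=0.0) ** 2).sum(dim=-1)

# A small proxy trained to imitate the optimal solution x*(theta) = max(theta, 0).
proxy = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d), nn.Softplus())
opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)
for _ in range(2000):
    theta = torch.empty(256, d).uniform_(-1.0, 1.0)
    loss = nn.functional.mse_loss(proxy(theta), torch.clamp(theta, min=0.0))
    opt.zero_grad(); loss.backward(); opt.step()

# Primal heuristic: projected gradient ascent on theta in [-1, 1]^d to
# maximize gap(theta) = f(proxy(theta); theta) - f*(theta).
theta = torch.empty(512, d).uniform_(-1.0, 1.0).requires_grad_(True)
for _ in range(200):
    gap = ((proxy(theta) - theta) ** 2).sum(dim=-1) - opt_value(theta)
    grad, = torch.autograd.grad(gap.sum(), theta)
    with torch.no_grad():
        theta += 0.05 * grad
        theta.clamp_(-1.0, 1.0)

with torch.no_grad():
    gap = ((proxy(theta) - theta) ** 2).sum(dim=-1) - opt_value(theta)
print(f"largest optimality gap found: {gap.max().item():.4f}")
```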
Abstract: Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains. Some SDMU applications are naturally modeled as Multistage Stochastic Optimization Problems (MSPs), but the resulting optimizations are notoriously challenging from a computational standpoint. Under assumptions of convexity and stage-wise independence of the uncertainty, the resulting optimization can be solved efficiently using Stochastic Dual Dynamic Programming (SDDP). Two-stage Linear Decision Rules (TS-LDRs) have been proposed to solve MSPs without the stage-wise independence assumption. TS-LDRs are computationally tractable, but using a policy that is a linear function of past observations is typically not suitable for non-convex environments arising, for example, in energy systems. This paper introduces a novel approach, Two-Stage General Decision Rules (TS-GDR), to generalize the policy space beyond linear functions, making it suitable for non-convex environments. TS-GDR is a self-supervised learning algorithm that trains the nonlinear decision rules using stochastic gradient descent (SGD); its forward passes solve the policy implementation optimization problems, and the backward passes leverage duality theory to obtain closed-form gradients. The effectiveness of TS-GDR is demonstrated through an instantiation using Deep Recurrent Neural Networks named Two-Stage Deep Decision Rules (TS-DDR). The method inherits the flexibility and computational performance of Deep Learning methodologies to solve SDMU problems that are generally tackled through large-scale optimization techniques. Applied to the Long-Term Hydrothermal Dispatch (LTHD) problem using actual power system data from Bolivia, TS-DDR not only enhances solution quality but also reduces computation times by several orders of magnitude.
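The forward/backward structure described above can be sketched on a toy two-stage problem: a neural policy proposes a first-stage decision, a recourse LP is solved in the forward pass, and the LP duals provide closed-form gradients in the backward pass. Everything below (problem data, sizes, networks) is an illustrative assumption, not the LTHD instantiation.

```python
# Minimal sketch in the spirit of TS-GDR/TS-DDR: forward pass solves the
# second-stage (recourse) LP, backward pass uses LP duals as gradients.
import numpy as np
import torch
from scipy.optimize import linprog

torch.manual_seed(0); rng = np.random.default_rng(0)
n_u, n_y, n_c = 3, 4, 5
T = rng.uniform(0.5, 1.5, size=(n_c, n_u))      # first-stage technology matrix
W = rng.uniform(0.5, 1.5, size=(n_c, n_y))      # recourse matrix
q = rng.uniform(1.0, 2.0, size=n_y)             # recourse costs
c = rng.uniform(0.2, 0.5, size=n_u)             # first-stage costs

class Recourse(torch.autograd.Function):
    """Q(u, h) = min_y q'y  s.t.  W y >= h - T u,  y >= 0."""
    @staticmethod
    def forward(ctx, u, h):
        u_np, h_np = u.detach().numpy(), h.detach().numpy()
        res = linprog(q, A_ub=-W, b_ub=T @ u_np - h_np, method="highs")
        m = res.ineqlin.marginals                # dQ / d(b_ub), nonpositive
        ctx.save_for_backward(torch.tensor(T.T @ m, dtype=u.dtype))
        return torch.tensor(res.fun, dtype=u.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (grad_u,) = ctx.saved_tensors            # dQ/du = T' * marginals (duality)
        return grad_out * grad_u, None

policy = torch.nn.Sequential(torch.nn.Linear(n_c, 16), torch.nn.ReLU(),
                             torch.nn.Linear(16, n_u), torch.nn.Softplus())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for step in range(300):
    h = torch.tensor(rng.uniform(1.0, 3.0, size=n_c), dtype=torch.float32)  # demand scenario
    u = policy(h)                                # first-stage decision from the policy
    loss = (torch.tensor(c, dtype=torch.float32) * u).sum() + Recourse.apply(u, h)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"total cost on the last sampled scenario: {loss.item():.3f}")
```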
Abstract: Optimal Power Flow (OPF) is a valuable tool for power system operators, but it is a difficult problem to solve for large systems. Machine Learning (ML) algorithms, especially Neural Network-based (NN) optimization proxies, have emerged as a promising new tool for solving OPF, by estimating the OPF solution much faster than traditional methods. However, these ML algorithms act as black boxes, and it is hard to assess their worst-case performance across the entire range of possible inputs that an OPF can have. Previous work has proposed a mixed-integer programming-based methodology to quantify the worst-case violations caused by a NN trained to estimate the OPF solution, throughout the entire input domain. This approach, however, does not scale well to large power systems and more complex NN models. This paper addresses these issues by proposing a scalable algorithm to compute worst-case violations of NN proxies used for approximating large power systems within a reasonable time limit. This will help build trust in ML models to be deployed in large industry-scale power grids.
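The underlying verification idea can be shown on a tiny example: encode a ReLU network with big-M mixed-integer constraints and maximize the violation of an output limit over a box input domain. The network weights, bounds, and limit below are illustrative, and this brute-force encoding is the baseline that the paper's scalable algorithm improves upon, not the algorithm itself.

```python
# Minimal sketch of MILP-based worst-case verification for a tiny ReLU proxy:
# big-M encoding of the network, maximizing violation of an output limit.
import numpy as np
import pulp

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
w2, b2 = rng.normal(size=n_hid), 0.0
M = 100.0                                   # big-M valid for these weights and bounds
y_max = 1.0                                 # output limit being verified

prob = pulp.LpProblem("worst_case_violation", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{j}", lowBound=-1, upBound=1) for j in range(n_in)]
h = [pulp.LpVariable(f"h{i}", lowBound=0, upBound=M) for i in range(n_hid)]
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n_hid)]

for i in range(n_hid):                      # h_i = ReLU(W1_i x + b1_i), big-M encoding
    pre = pulp.lpSum(W1[i, j] * x[j] for j in range(n_in)) + b1[i]
    prob += h[i] >= pre
    prob += h[i] <= pre + M * (1 - z[i])
    prob += h[i] <= M * z[i]

out = pulp.lpSum(w2[i] * h[i] for i in range(n_hid)) + b2
prob += out - y_max                         # objective: violation of the output limit
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("worst-case violation:", pulp.value(prob.objective))
print("at input:", [pulp.value(v) for v in x])
```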
Abstract: This paper presents Dual Lagrangian Learning (DLL), a principled learning methodology that combines conic duality theory with the representation power of ML models. DLL leverages conic duality to provide dual-feasible solutions, and therefore valid Lagrangian dual bounds, for parametric linear and nonlinear conic optimization problems. The paper introduces differentiable conic projection layers, a systematic dual completion procedure, and a self-supervised learning framework. The effectiveness of DLL is demonstrated on linear and nonlinear parametric optimization problems for which DLL provides valid dual bounds within 0.5% of optimality.
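A differentiable conic projection layer can be illustrated with the Euclidean projection onto the second-order cone, written with differentiable operations so gradients flow through it. This is one illustrative instance of such a layer, not the paper's implementation.

```python
# Minimal sketch of a differentiable conic projection layer: Euclidean
# projection onto the second-order cone K = {(t, u): ||u||_2 <= t},
# built from differentiable PyTorch ops so it can sit inside a model.
import torch

def soc_projection(t: torch.Tensor, u: torch.Tensor, eps: float = 1e-12):
    """Project (t, u) onto the second-order cone, batch-wise."""
    norm_u = u.norm(dim=-1, keepdim=True)
    inside = norm_u <= t.unsqueeze(-1)                 # already in the cone
    polar = norm_u <= -t.unsqueeze(-1)                 # in the polar cone -> project to 0
    alpha = 0.5 * (t.unsqueeze(-1) + norm_u)           # boundary-case scaling
    u_proj = torch.where(inside, u,
             torch.where(polar, torch.zeros_like(u), alpha * u / (norm_u + eps)))
    t_proj = torch.where(inside.squeeze(-1), t,
             torch.where(polar.squeeze(-1), torch.zeros_like(t), alpha.squeeze(-1)))
    return t_proj, u_proj

# Gradients flow through the projection, so it can be used as a layer.
t = torch.tensor([-0.5, 2.0, 0.3], requires_grad=True)
u = torch.randn(3, 4, requires_grad=True)
t_p, u_p = soc_projection(t, u)
(t_p.sum() + u_p.sum()).backward()
print(t_p, u.grad.shape)
```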
Abstract: This paper introduces Dual Interior Point Learning (DIPL) and Dual Supergradient Learning (DSL) to learn dual-feasible solutions to parametric linear programs with bounded variables, which are pervasive across many industries. DIPL mimics a novel dual interior point algorithm, while DSL mimics classical dual supergradient ascent. DIPL and DSL ensure dual feasibility by predicting the dual variables associated with the constraints and then exploiting the flexibility of the duals of the bound constraints. DIPL and DSL complement existing primal learning methods by providing a certificate of quality. They are shown to produce high-fidelity dual-feasible solutions to large-scale optimal power flow problems, providing valid dual bounds within a 0.5% optimality gap.
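The bound-dual completion step can be illustrated on a toy bounded LP: given any nonnegative duals for the main constraints, the duals of the bound constraints are set in closed form from the reduced costs, yielding a valid lower bound by weak duality. The LP data and the "predicted" duals below are illustrative placeholders, not the power-flow models.

```python
# Minimal sketch of bound-dual completion on a toy bounded LP:
#   min c'x  s.t.  A x >= b,  l <= x <= u.
# For any y >= 0, the Lagrangian bound is b'y + sum_j min_{l_j<=x_j<=u_j} r_j x_j,
# with reduced costs r = c - A'y, so the bound duals are chosen in closed form.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 5, 8
A = rng.uniform(0.1, 1.0, size=(m, n))
b = rng.uniform(1.0, 2.0, size=m)
c = rng.uniform(0.5, 1.5, size=n)
l, u = np.zeros(n), np.full(n, 2.0)

def dual_bound(y):
    """Valid lower bound for any y >= 0 via bound-dual completion."""
    r = c - A.T @ y                                   # reduced costs
    # each x_j independently minimizes r_j * x_j over [l_j, u_j]
    return b @ y + np.where(r >= 0, r * l, r * u).sum()

# Stand-in for an ML prediction of the constraint duals (clipped to y >= 0).
y_pred = np.clip(rng.normal(0.5, 0.2, size=m), 0.0, None)

opt = linprog(c, A_ub=-A, b_ub=-b, bounds=list(zip(l, u)), method="highs")
print(f"optimal value: {opt.fun:.4f}")
print(f"dual bound   : {dual_bound(y_pred):.4f}  (always <= optimal value)")
```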