Abstract: Large volumes of experimental and simulation aerodynamic data have been rapidly advancing aerodynamic shape optimization (ASO) via machine learning (ML), whose effectiveness has been growing thanks to continued developments in deep learning. In this review, we first introduce the state of the art and the unsolved challenges in ASO. Next, we present a description of ML fundamentals and detail the ML algorithms that have succeeded in ASO. Then we review ML applications contributing to ASO from three fundamental perspectives: compact geometric design space, fast aerodynamic analysis, and efficient optimization architecture. In addition to providing a comprehensive summary of the research, we comment on the practicality and effectiveness of the developed methods. We show how cutting-edge ML approaches can benefit ASO and address challenging demands such as interactive design optimization. However, practical large-scale design optimizations remain a challenge because of the high cost of training ML models. A deep coupling of ML model construction with prior ASO experience and knowledge, such as taking physics into account, is recommended to train ML models effectively.
Abstract: We present a parsimonious surrogate framework for learning high-dimensional parametric maps from limited training data. The need for parametric surrogates arises in many applications that require repeated queries of complex computational models. These applications include such "outer-loop" problems as Bayesian inverse problems, optimal experimental design, and optimal design and control under uncertainty, as well as real-time inference and control problems. Many high-dimensional parametric mappings admit low-dimensional structure, which can be exploited via mapping-informed reduced bases of the inputs and outputs. Exploiting this property, we develop a framework for learning low-dimensional approximations of such maps by adaptively constructing ResNet approximations between reduced bases of their inputs and outputs. Motivated by recent approximation theory for ResNets as discretizations of control flows, we prove a universal approximation property of our proposed adaptive projected ResNet framework, which motivates a related iterative algorithm for the ResNet construction. This strategy represents a confluence of the approximation theory and the algorithm, since both make use of sequentially minimizing flows. In numerical examples we show that these parsimonious, mapping-informed architectures achieve remarkably high accuracy given limited training data, making them a desirable surrogate strategy that can be deployed with minimal computational investment in training data generation.
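The projected-ResNet architecture described in the second abstract can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the dimensions, block count, and random placeholder weights are assumptions, and in practice the reduced bases would be derived from the map (e.g., via active subspaces for inputs and PCA of outputs) and the weights learned from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 100, 80   # full input/output dimensions (illustrative)
r_in, r_out = 8, 8      # reduced-basis ranks (illustrative)
width, depth = 16, 4    # hidden width and number of ResNet blocks

# Placeholder reduced bases; a mapping-informed construction would
# derive these from derivative or output samples of the true map.
V_in, _ = np.linalg.qr(rng.standard_normal((d_in, r_in)))
V_out, _ = np.linalg.qr(rng.standard_normal((d_out, r_out)))

# Residual blocks acting on reduced coordinates:
#   z <- z + h * W2 @ tanh(W1 @ z + b)
# which mirrors an explicit Euler step of a control flow.
blocks = [
    (0.1 * rng.standard_normal((width, r_in)),   # W1
     rng.standard_normal(width),                 # b
     0.1 * rng.standard_normal((r_in, width)))   # W2
    for _ in range(depth)
]
lift = rng.standard_normal((r_out, r_in))  # final linear map between reduced spaces

def surrogate(x, h=0.5):
    """Project input, flow through ResNet blocks, expand to full output."""
    z = V_in.T @ x                        # project onto input reduced basis
    for W1, b, W2 in blocks:              # sequential residual (flow) steps
        z = z + h * (W2 @ np.tanh(W1 @ z + b))
    return V_out @ (lift @ z)             # lift back to full output space

y = surrogate(rng.standard_normal(d_in))
print(y.shape)  # (80,)
```

The adaptive construction in the paper would grow and train such blocks iteratively; this sketch only shows the projection–ResNet–expansion composition that makes the surrogate parsimonious relative to a full-dimensional network.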