Abstract: Physics-informed machine learning provides an approach to combining data and governing physical laws for solving complex partial differential equations (PDEs). However, efficiently solving PDEs with varying parameters and changing initial conditions and boundary conditions (ICBCs), with theoretical guarantees, remains an open challenge. We propose a hybrid framework that uses a neural network to learn B-spline control points to approximate solutions of PDEs with varying system and ICBC parameters. The proposed network can be trained efficiently because ICBCs can be specified directly without imposing losses, physics-informed loss functions can be computed through analytical formulas, and only the weights of the B-spline functions need to be learned, as opposed to both weights and bases in traditional neural operator learning methods. We provide theoretical guarantees that the proposed B-spline networks serve as universal approximators for the set of solutions of PDEs with varying ICBCs under mild conditions, and we establish bounds on the generalization errors in physics-informed learning. We also demonstrate in experiments that the proposed B-spline network can solve problems with discontinuous ICBCs, outperforms existing methods, and learns solutions of 3D dynamics with diverse initial conditions.
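To make the representation concrete, the following is a minimal 1D sketch of the idea: the solution is written as a B-spline expansion whose control points are produced by a network conditioned on the PDE/ICBC parameters, and derivatives for physics-informed residuals come from the spline in closed form. The placeholder linear map standing in for the trained network, the knot vector, and all sizes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical sketch: a 1D solution u(x; mu) is represented as a B-spline expansion
# u(x; mu) = sum_i c_i(mu) * B_i(x), where the control points c(mu) would come from
# a trained network. Here a random linear map stands in for that network.
degree = 3
n_ctrl = 10                                   # number of control points
# Clamped (open uniform) knot vector on [0, 1]
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        [1.0] * degree))

rng = np.random.default_rng(0)
W = rng.normal(size=(n_ctrl, 2))              # placeholder "network" weights

def control_points(mu):
    """Map PDE/ICBC parameters mu -> B-spline control points (stand-in for the NN)."""
    return W @ np.asarray(mu)

def u(x, mu):
    """Evaluate the B-spline surrogate solution at points x for parameters mu."""
    return BSpline(knots, control_points(mu), degree)(x)

x = np.linspace(0.0, 1.0, 200)
print(u(x, mu=[1.0, 0.5])[:5])                # first few surrogate solution values

# Derivatives needed for physics-informed residuals are available analytically:
du_dx = BSpline(knots, control_points([1.0, 0.5]), degree).derivative()(x)
```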
Abstract: Control systems are indispensable for ensuring the safety of cyber-physical systems (CPS), which span domains such as automobiles, airplanes, and missiles. Safeguarding CPS necessitates runtime methodologies that continuously monitor safety-critical conditions and respond in a verifiably safe manner. A fundamental aspect of many safety approaches is predicting the future behavior of the system, which requires accurate models that can run in real time. Motivated by DeepONets, we propose a novel strategy that combines the inductive bias of B-splines with data-driven neural networks to enable real-time prediction of CPS behavior. We introduce our hybrid B-spline neural operator, establish its capability as a universal approximator, and provide rigorous bounds on the approximation error. These results apply to a broad class of nonlinear autonomous systems and are validated experimentally on a controlled 6-degree-of-freedom (DOF) quadrotor with a 12-dimensional state space. Furthermore, we conduct a comparative analysis of different network architectures, specifically fully connected neural networks (FCNNs) and recurrent neural networks (RNNs), to elucidate the practical utility and trade-offs of each architecture in real-world scenarios.
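As a rough illustration of the operator-style predictor described above, the sketch below maps a 12-dimensional state to B-spline control points in time, yielding a continuous-time trajectory prediction. The plain FCNN, its layer sizes, the horizon, and the knot layout are assumptions for illustration only, not the paper's exact model; an RNN encoder could replace the FCNN in the same role.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import BSpline

# Hypothetical sketch: an FCNN maps the current 12-dimensional quadrotor state to
# B-spline control points in time, giving a continuous-time prediction
# x_hat(t) = sum_i C_i * B_i(t) over the horizon.
state_dim, n_ctrl, degree, horizon = 12, 8, 3, 1.0
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, horizon, n_ctrl - degree + 1),
                        [horizon] * degree))

net = nn.Sequential(                           # FCNN variant; an RNN could be swapped in
    nn.Linear(state_dim, 64), nn.Tanh(),
    nn.Linear(64, n_ctrl * state_dim),
)

def predict(x0, t):
    """Predict the state trajectory at times t from the current state x0."""
    ctrl = net(torch.as_tensor(x0, dtype=torch.float32))
    ctrl = ctrl.detach().numpy().reshape(n_ctrl, state_dim)
    return BSpline(knots, ctrl, degree)(t)     # shape (len(t), state_dim)

t = np.linspace(0.0, horizon, 50)
traj = predict(np.zeros(state_dim), t)
print(traj.shape)
```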