Abstract: Particle accelerator operation requires simultaneous optimization of multiple objectives. Multi-objective optimization (MOO) is particularly challenging due to trade-offs between the objectives. Evolutionary algorithms, such as the genetic algorithm (GA), have been applied to many optimization problems; by design, however, they do not extend to complex control problems. This paper demonstrates the power of differentiability for solving MOO problems in particle accelerators using a Deep Differentiable Reinforcement Learning (DDRL) algorithm. We compare the DDRL algorithm with Model-Free Reinforcement Learning (MFRL), GA, and Bayesian Optimization (BO) for simultaneous optimization of heat load and trip rates at the Continuous Electron Beam Accelerator Facility (CEBAF). The underlying problem enforces strict constraints on individual states and actions, as well as a cumulative (global) constraint on the energy requirements of the beam. A physics-based surrogate model built from real data is developed. This surrogate model is differentiable and allows back-propagation of gradients. The results are evaluated as a Pareto front over the two objectives. We show that DDRL outperforms MFRL, BO, and GA on high-dimensional problems.
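The core idea of exploiting differentiability can be illustrated with a minimal sketch: descend the analytic gradient of a scalarized objective through a differentiable surrogate, sweeping the trade-off weight to trace a Pareto front. The two quadratic objectives below are illustrative stand-ins, not the CEBAF surrogate model, and the hand-coded gradient plays the role that autodiff through the surrogate plays in DDRL.

```python
import numpy as np

# Hypothetical differentiable surrogate: two competing quadratic objectives
# over an action vector a (illustrative stand-ins for heat load / trip rate).
def heat_load(a):
    return np.sum(a ** 2)

def trip_rate(a):
    return np.sum((a - 1.0) ** 2)

def grad_scalarized(a, w):
    # Analytic gradient of w*heat_load + (1-w)*trip_rate; in DDRL this
    # gradient would come from back-propagation through the surrogate.
    return w * 2 * a + (1 - w) * 2 * (a - 1.0)

def optimize(w, dim=8, steps=500, lr=0.1):
    a = np.zeros(dim)
    for _ in range(steps):
        a -= lr * grad_scalarized(a, w)
    return a

# Sweeping the weight w traces an approximate Pareto front
# between the two objectives.
front = [(heat_load(optimize(w)), trip_rate(optimize(w)))
         for w in (0.25, 0.5, 0.75)]
```

Each weight converges to the corresponding trade-off point (here, a = 1 - w in every component), which is the kind of direct gradient-based step that gradient-free methods such as GA and BO cannot take.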
Abstract: Traditional black-box optimization methods are inefficient when dealing with multi-point measurements, i.e., when each query in the control domain requires a set of measurements in a secondary domain to calculate the objective. In particle accelerators, emittance tuning from quadrupole scans is an example of optimization with multi-point measurements. Although the emittance is a critical parameter for the performance of high-brightness machines, including X-ray lasers and linear colliders, comprehensive optimization is often limited by the time required for tuning. Here, we extend the recently proposed Bayesian Algorithm Execution (BAX) to the task of optimization with multi-point measurements. BAX achieves sample efficiency by selecting and modeling individual points in the joint control-measurement domain. We apply BAX to emittance minimization at the Linac Coherent Light Source (LCLS) and the Facility for Advanced Accelerator Experimental Tests II (FACET-II) particle accelerators. In an LCLS simulation environment, we show that BAX delivers a 20x increase in efficiency while also being more robust to noise compared to traditional optimization methods. Additionally, we ran BAX live at both LCLS and FACET-II, matching the hand-tuned emittance at FACET-II and achieving an optimal emittance that was 24% lower than that obtained by hand-tuning at LCLS. We anticipate that our approach can readily be adapted to other types of optimization problems involving multi-point measurements commonly found in scientific instruments.
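The multi-point structure of the problem can be seen in a simplified quadrupole-scan sketch: a single emittance value requires a full scan of measurements, here a quadratic fit of beam size squared versus quad strength. The coefficients and the emittance formula below are illustrative only (a real scan maps the fit coefficients through the transfer matrix); the point is that one "query" costs many measurements, which is exactly what BAX avoids by modeling individual (control, measurement) points.

```python
import numpy as np

# Illustrative quadrupole scan: beam size squared vs. quad strength k.
k = np.linspace(-3.0, 3.0, 7)             # quad strengths scanned
sigma2_true = 0.5 * k**2 - 0.4 * k + 0.3  # synthetic ground truth
rng = np.random.default_rng(0)
sigma2 = sigma2_true + rng.normal(0.0, 0.01, k.size)  # noisy measurements

# One objective evaluation = one full scan = k.size measurements.
a, b, c = np.polyfit(k, sigma2, 2)
# For sigma^2 = a k^2 + b k + c, a scaled emittance-like quantity is
# eps ~ sqrt(a*c - b^2/4), up to transfer-matrix factors omitted here.
eps = np.sqrt(a * c - b**2 / 4)
```

Traditional black-box optimizers treat `eps` as a single expensive query; BAX instead models the underlying (k, sigma2) points directly, which is where the reported sample-efficiency gain comes from.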
Abstract: Particle accelerators support a wide array of scientific, industrial, and medical applications. To meet the needs of these applications, accelerator physicists rely heavily on detailed simulations of the complicated particle beam dynamics through the accelerator. One of the most computationally expensive and difficult-to-model effects is the impact of Coherent Synchrotron Radiation (CSR). As a beam travels through a curved trajectory (e.g., due to a bending magnet), it emits radiation that in turn interacts with the rest of the beam. At each step through the trajectory, the electromagnetic field introduced by CSR (called the CSR wakefield) needs to be computed and used when calculating the updates to the positions and momenta of every particle in the beam. CSR is one of the major drivers of growth in the beam emittance, which is a key metric of beam quality that is critical in many applications. The CSR wakefield is very computationally intensive to compute with traditional electromagnetic solvers, and this is a major limitation in accurately simulating accelerators. Here, we demonstrate a new approach for the CSR wakefield computation using a neural network solver structured in a way that is readily generalizable to new setups. We validate its performance by adding it to a standard beam tracking test problem and show a ten-fold speedup along with high accuracy.
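The role of the surrogate can be sketched schematically: a tracking loop applies a wakefield kick to the momenta at every step, and the per-step wakefield solve is the bottleneck the paper replaces with a neural network. Both the toy "solver" and the update rule below are placeholder stand-ins for illustration, not the paper's model.

```python
import numpy as np

def csr_wake_solver(z):
    # Stand-in for an expensive electromagnetic field solve; a trained
    # neural-network surrogate would be dropped in at this call site.
    return -0.01 * (z - z.mean())

def track(z, pz, steps=100, ds=0.01, wake=csr_wake_solver):
    # Schematic longitudinal tracking loop: at each step, evaluate the
    # CSR wakefield, kick the momenta, then drift the positions.
    for _ in range(steps):
        pz = pz + ds * wake(z)   # momentum kick from the CSR wakefield
        z = z + ds * pz          # position update
    return z, pz
```

Because `wake` is called once per step for the whole bunch, a ten-fold faster surrogate at that call site translates almost directly into a ten-fold faster simulation when the wakefield solve dominates the cost.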
Abstract: A novel approach to expedite design optimization of nonlinear beam dynamics in storage rings is proposed and demonstrated in this study. At each iteration, a neural network surrogate model is used to suggest new trial solutions in a multi-objective optimization task. The surrogate model is then updated with the new solutions, and this process is repeated until the final optimized solution is obtained. We apply this approach to optimize the nonlinear beam dynamics of the SPEAR3 storage ring, where sextupole knobs are adjusted to simultaneously improve the dynamic aperture and the momentum aperture. The approach is shown to converge to the Pareto front considerably faster than the genetic and particle swarm algorithms.
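The iterate-suggest-update loop described above can be sketched in miniature: fit a cheap surrogate to the solutions evaluated so far, let the surrogate propose the next trial point, evaluate it with the expensive simulation, and refit. The 1D quadratic "simulation" and polynomial surrogate are illustrative stand-ins; the paper uses a neural network surrogate and SPEAR3 tracking simulations over many sextupole knobs.

```python
import numpy as np

def expensive_sim(x):
    # Stand-in for a costly tracking simulation (e.g., dynamic aperture).
    return (x - 0.3) ** 2

# Seed the surrogate with a few evaluated solutions.
X = list(np.linspace(0.0, 1.0, 4))
Y = [expensive_sim(x) for x in X]

for _ in range(5):
    coef = np.polyfit(X, Y, 2)                        # cheap surrogate fit
    grid = np.linspace(0.0, 1.0, 201)
    x_new = grid[np.argmin(np.polyval(coef, grid))]   # surrogate's suggestion
    X.append(x_new)                                   # evaluate with the
    Y.append(expensive_sim(x_new))                    # expensive simulation

best = X[int(np.argmin(Y))]
```

Each loop iteration spends only one expensive evaluation on a surrogate-vetted candidate, which is the mechanism behind the faster convergence to the Pareto front reported versus genetic and particle swarm algorithms.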