Abstract: We explore a reinforcement learning strategy to automate and accelerate h/p-multigrid methods in high-order solvers. Multigrid methods are very efficient but require fine-tuning of numerical parameters, such as the number of smoothing sweeps per level and the correction fraction (i.e., the proportion of the corrected solution that is transferred from a coarser grid to a finer grid). The objective of this paper is to use a proximal policy optimization algorithm to automatically tune the multigrid parameters and, by doing so, improve the stability and efficiency of the h/p-multigrid strategy. Our findings reveal that the proposed reinforcement learning h/p-multigrid approach significantly accelerates and improves the robustness of steady-state simulations of the one-dimensional advection-diffusion and nonlinear Burgers' equations, when discretized using high-order h/p methods, on uniform and nonuniform grids.
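To make the tuned quantities concrete, the sketch below casts one multigrid cycle as an environment step whose action is exactly the pair named in the abstract: the number of smoothing sweeps and the correction fraction. This is a minimal illustration, not the paper's h/p-multigrid solver: the geometric two-grid Poisson-type problem, the weighted-Jacobi smoother, and the reward shaping (residual reduction per unit smoothing work) are all assumptions made here for clarity; a real setup would wrap such a step in a Gymnasium-style environment and train a PPO agent on it.

```python
# Illustrative two-grid "environment" step; the action is (sweeps, correction
# fraction). Problem, smoother, and reward shaping are assumptions, not the
# paper's h/p-multigrid implementation.
import numpy as np

N = 129                      # fine-grid points (including boundaries), assumed
nu = 0.05                    # diffusion coefficient, assumed
h = 1.0 / (N - 1)
Nc = (N + 1) // 2            # coarse-grid points
hc = 1.0 / (Nc - 1)
# Dense coarse operator for -nu * u'' with Dirichlet BCs (small, solved exactly).
Ac = (nu / hc**2) * (2 * np.eye(Nc - 2) - np.eye(Nc - 2, k=1) - np.eye(Nc - 2, k=-1))

def jacobi(u, f, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -nu*u'' = f, homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        un = u.copy()
        un[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1] / nu)
        u = un
    return u

def residual(u, f):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - nu * (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    """Full-weighting restriction, fine -> coarse."""
    rc = np.zeros(Nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec):
    """Linear interpolation, coarse -> fine."""
    ef = np.zeros(N)
    ef[::2] = ec
    ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return ef

def step(u, f, sweeps, alpha):
    """One two-grid cycle driven by the action (sweeps, correction fraction alpha)."""
    r0 = np.linalg.norm(residual(u, f))
    u = jacobi(u, f, sweeps)                      # pre-smoothing (tuned count)
    ec = np.zeros(Nc)
    ec[1:-1] = np.linalg.solve(Ac, restrict(residual(u, f))[1:-1])
    u = u + alpha * prolong(ec)                   # partial coarse-grid correction
    r1 = np.linalg.norm(residual(u, f))
    # Assumed reward shaping: orders of residual reduction per unit of work.
    reward = np.log10(r0 / max(r1, 1e-300)) / (sweeps + 1)
    return u, r1, reward

# Fixed action shown for illustration; PPO would select (sweeps, alpha) per cycle.
rng = np.random.default_rng(0)
f = np.sin(np.pi * np.linspace(0.0, 1.0, N))
u = rng.standard_normal(N)
u[0] = u[-1] = 0.0
for k in range(10):
    u, r, rew = step(u, f, sweeps=3, alpha=0.8)
    print(f"cycle {k}: residual {r:.2e}, reward {rew:.2f}")
```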