Abstract: To date, power electronics parameter design tasks are usually tackled either with sophisticated optimization approaches relying on detailed simulations or with brute-force grid search relying on very fast simulations. A new method, named "Continuously Adapting Random Sampling" (CARS), is proposed, which provides a continuous transition between these two extremes. It allows for very fast and/or large numbers of simulations, while increasingly focusing on the most promising parameter ranges. Inspiration is drawn from multi-armed bandit research, leading to prioritized sampling of sub-domains in one high-dimensional parameter tensor. Performance has been evaluated on three exemplary power electronics use cases, where the resulting designs are competitive with those of genetic algorithms, while additionally allowing for highly parallelizable simulation and a continuous progression between explorative and exploitative settings.
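A minimal sketch of the kind of bandit-inspired, prioritized sub-domain sampling the abstract describes is given below. All concrete choices (number of bins, softmax prioritization, temperature annealing, the toy objective and parameter ranges) are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: prioritized random sampling over parameter sub-domains,
# with per-sub-domain statistics updated in a multi-armed-bandit fashion.
import numpy as np

rng = np.random.default_rng(0)

# Each parameter range is split into sub-domains (bins); one bandit per parameter.
bounds = np.array([[1e-6, 1e-4],      # toy range, e.g. an inductance [H]
                   [1e-9, 1e-6],      # toy range, e.g. a capacitance [F]
                   [2e4,  2e5]])      # toy range, e.g. a switching frequency [Hz]
n_bins = 10
scores = np.zeros((len(bounds), n_bins))   # running reward estimate per bin
counts = np.ones((len(bounds), n_bins))    # visit counts (start at 1 to avoid /0)

def objective(x):
    # Placeholder for a very fast circuit simulation returning a loss to minimize.
    target = np.array([3e-5, 2e-7, 1e5])
    return np.sum(((x - target) / (bounds[:, 1] - bounds[:, 0])) ** 2)

def sample(temperature):
    # Softmax over per-bin scores: high temperature -> near-uniform (explorative),
    # low temperature -> concentrated on the best bins (exploitative).
    z = (scores - scores.max(axis=1, keepdims=True)) / temperature
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)
    bins = np.array([rng.choice(n_bins, p=p) for p in probs])
    lo = bounds[:, 0] + (bounds[:, 1] - bounds[:, 0]) * bins / n_bins
    hi = lo + (bounds[:, 1] - bounds[:, 0]) / n_bins
    return bins, rng.uniform(lo, hi)

best = np.inf
for step in range(2000):
    temperature = max(0.05, 1.0 - step / 2000)   # anneal from explore to exploit
    bins, x = sample(temperature)
    reward = -objective(x)                        # evaluations could also run in parallel batches
    best = min(best, -reward)
    idx = np.arange(len(bounds))
    counts[idx, bins] += 1
    scores[idx, bins] += (reward - scores[idx, bins]) / counts[idx, bins]  # incremental mean

print(f"best loss after prioritized sampling: {best:.4g}")
```

Because each draw is independent given the current sampling weights, batches of candidate designs can be simulated in parallel, and the temperature schedule provides the continuous explorative-to-exploitative progression mentioned above.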
Abstract: The optimization of electrical circuits is a difficult and time-consuming process performed by experts, but also increasingly by sophisticated algorithms. In this paper, a reinforcement learning (RL) approach is adapted to optimize an LLC converter for high efficiency at multiple operation points, corresponding to different output powers and switching frequencies. During a training period, the RL agent learns a problem-specific optimization policy, enabling optimizations for any objective and boundary condition within a pre-defined range. The results show that the trained RL agent is able to solve new optimization problems within 50 tuning steps, based on LLC converter simulations using the Fundamental Harmonic Approximation (FHA), achieving power efficiencies greater than 90% at two operation points. Therefore, this AI technique has the potential to augment expert-driven design processes with data-driven strategy extraction in the field of power electronics and beyond.
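A rough sketch of such an FHA-based tuning loop is shown below. The gain expression is the standard FHA voltage-gain formula for an LLC resonant tank; the reward shaping, parameter ranges, operation points, and the random placeholder "agent" are assumptions for illustration only and do not reproduce the trained RL policy or efficiency model from the paper:

```python
# Hedged sketch: an agent adjusts LLC resonant-tank parameters and receives
# feedback from a Fundamental Harmonic Approximation (FHA) model.
import numpy as np

def fha_gain(fn, ln, q):
    """Normalized LLC voltage gain under FHA.
    fn = fs/fr (normalized switching frequency), ln = Lm/Lr, q = quality factor."""
    return 1.0 / np.sqrt((1 + 1/ln - 1/(ln * fn**2))**2 + q**2 * (fn - 1/fn)**2)

def step(params, action, targets):
    """One tuning step: apply a small relative adjustment to (ln, q), evaluate all
    operation points, and return a reward that grows as every target gain is met."""
    params = params * (1 + 0.05 * np.clip(action, -1, 1))
    errors = [abs(fha_gain(fn, params[0], params[1]) - m_t) for fn, m_t in targets]
    return params, -max(errors)

# Two operation points (normalized frequency, required gain), e.g. full and partial load.
targets = [(0.8, 1.2), (1.1, 0.95)]
params = np.array([5.0, 0.4])            # initial ln, q (illustrative)
rng = np.random.default_rng(0)

best_reward = -np.inf
for t in range(50):                       # the abstract reports about 50 tuning steps
    action = rng.uniform(-1, 1, size=2)   # placeholder for the trained policy's output
    params, reward = step(params, action, targets)
    best_reward = max(best_reward, reward)

print(f"best (negative) gain error over 50 steps: {best_reward:.3f}")
```

In the approach described above, the random action would be replaced by the output of the trained RL policy, which selects tuning steps based on the current parameter values and the optimization objective.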