Abstract: Modern algorithms in the domain of Deep Reinforcement Learning (DRL) have demonstrated remarkable successes; most widely known are those in game-based scenarios, from ATARI video games to Go and the StarCraft~\textsc{II} real-time strategy game. However, applications in the domain of modern Cyber-Physical Systems (CPS) that take advantage of the vast variety of DRL algorithms are few. We assume that the benefits would be considerable: Modern CPS have become increasingly complex and have evolved beyond traditional methods of modelling and analysis. At the same time, these CPS are confronted with an increasing amount of stochastic inputs, from volatile energy sources in power grids to broad user participation stemming from markets. Existing approaches to system modelling that use techniques from the domain of Artificial Intelligence (AI) do not focus on analysis and operation. In this paper, we describe the concept of Adversarial Resilience Learning (ARL), which formulates a new approach to complex environment checking and resilient operation: It defines two agent classes, attacker and defender agents. The quintessence of ARL lies in both agents exploring the system and training each other without any domain knowledge. Here, we introduce the ARL software architecture that allows the use of a wide range of model-free as well as model-based DRL algorithms, and we document results of concrete experiment runs on a complex power grid.