Abstract: Quantum Tiq-Taq-Toe is a well-known benchmark and playground for both quantum computing and machine learning. Despite its popularity, no reinforcement learning (RL) methods have been applied to Quantum Tiq-Taq-Toe. Although there has been some research on Quantum Chess, that game is significantly more complex in terms of computation and analysis. Therefore, we study the combination of quantum computing and reinforcement learning in Quantum Tiq-Taq-Toe, which may serve as an accessible testbed for the integration of both fields. Quantum games are challenging to represent classically due to their inherent partial observability and the potential for exponential state complexity. In Quantum Tiq-Taq-Toe, states are observed through Measurement (a 3x3 matrix of state probabilities) and Move History (a 9x9 matrix of entanglement relations), making strategy complex because each move can collapse the quantum state.
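As an illustrative sketch of the two observation channels named in the abstract (not the authors' implementation), the state an RL agent sees could be encoded as below. The array names, the placeholder values, and the concatenation into a flat vector are assumptions for demonstration only.

```python
import numpy as np

# Measurement channel: a 3x3 matrix giving, for each board cell, the
# probability that a measurement finds a piece there.
measurement = np.zeros((3, 3))
measurement[0, 0] = 0.5  # e.g., a quantum move split between two cells
measurement[1, 1] = 0.5

# Move-history channel: a 9x9 matrix over the 9 cells, where a nonzero
# entry (i, j) records an entanglement relation created by a move that
# touched cells i and j (stored symmetrically here, an assumption).
move_history = np.zeros((9, 9))
i, j = 0, 4  # cells (0, 0) and (1, 1) in row-major order
move_history[i, j] = move_history[j, i] = 1.0

# One plausible flat observation vector for an RL agent concatenates both.
observation = np.concatenate([measurement.ravel(), move_history.ravel()])
print(observation.shape)  # (90,)
```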
Abstract: The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is one of the most advanced algorithms in numerical black-box optimization. For noisy objective functions, several approaches have been proposed to mitigate the noise, e.g., re-evaluating the same solution or adapting the population size. In this paper, we devise a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise. We derive a theoretical lower bound on the expected improvement achieved in one iteration of CMA-ES, given an estimate of the noise level and the Lipschitz constant of the function's gradient. Solving for the maximum of the lower bound, we obtain a simple expression for the optimal re-evaluation number. We experimentally compare our method to state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions across various noise levels, optimization budgets, and dimensionalities. Our method demonstrates significant advantages in terms of the probability of hitting near-optimal function values.
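A minimal sketch of the re-evaluation idea behind this abstract: averaging m evaluations of f(x) + N(0, sigma^2) reduces the noise standard deviation by a factor of sqrt(m), which is why each extra re-evaluation buys accuracy at the cost of budget, and why an optimal m exists. The test function, noise level, and values of m below are assumptions for illustration, not the paper's derivation or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # assumed additive Gaussian noise level

def noisy_sphere(x, m):
    """Average m noisy evaluations of the sphere function f(x) = sum(x^2)."""
    true_value = np.sum(x ** 2)
    noise = rng.normal(0.0, sigma, size=m)
    return true_value + noise.mean()

x = np.ones(5)
for m in (1, 4, 16, 64):
    estimates = [noisy_sphere(x, m) for _ in range(1000)]
    # Empirical std of the averaged estimate shrinks roughly as sigma/sqrt(m).
    print(m, round(np.std(estimates), 3), round(sigma / np.sqrt(m), 3))
```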