Abstract:Quality diversity (QD) algorithms have been shown to provide sets of high-quality solutions for challenging problems in robotics, games, and combinatorial optimisation. So far, the theoretical foundations explaining their good behaviour in practice lag far behind their practical success. We contribute to the theoretical understanding of these algorithms and study the behaviour of QD algorithms on a classical planning problem that asks for several solutions. We study the all-pairs-shortest-paths (APSP) problem, which gives a natural formulation of the behavioural space based on all pairs of nodes of the given input graph that can be used by Map-Elites QD algorithms. Our results show that Map-Elites QD algorithms are able to compute a shortest path for each pair of nodes efficiently in parallel. Furthermore, we examine parent selection techniques for crossover that exhibit significant speed-ups compared to the standard QD approach.
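As a rough illustration of the setup described above, the following Python sketch keeps a Map-Elites archive with one cell per (start, end) node pair and accepts an offspring path only if its cell is empty or the new path is shorter. The variation step (joining two archived paths that share an endpoint) and all function names are assumptions of this sketch, not the paper's exact operators.

```python
import random

def path_cost(graph, path):
    # Sum of edge weights along the path; consecutive nodes are assumed adjacent.
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

def map_elites_apsp(graph, iterations=10_000):
    """Minimal Map-Elites sketch for all-pairs shortest paths: the behavioural
    descriptor of a path is its (start, end) pair, so the archive has one cell
    per node pair and keeps the cheapest path found so far."""
    # Seed the archive with single-edge paths.
    archive = {(u, v): [u, v] for u in graph for v in graph[u]}
    for _ in range(iterations):
        # Parent selection: a uniformly random occupied cell (standard Map-Elites).
        parent = random.choice(list(archive.values()))
        # Crossover-like variation: append an archived path that starts where
        # the parent ends, if one exists.
        end = parent[-1]
        extensions = [p for (start, _), p in archive.items() if start == end]
        if not extensions:
            continue
        child = parent[:-1] + random.choice(extensions)
        cell = (child[0], child[-1])
        if cell not in archive or path_cost(graph, child) < path_cost(graph, archive[cell]):
            archive[cell] = child
    return archive

# Tiny weighted digraph as an adjacency dict: graph[u][v] = weight of edge (u, v).
graph = {0: {1: 1.0, 2: 4.0}, 1: {2: 1.0, 3: 5.0}, 2: {3: 1.0}, 3: {}}
print(map_elites_apsp(graph).get((0, 3)))  # e.g. [0, 1, 2, 3]
```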
Abstract:In real-world applications, users often favor structurally diverse design choices over one high-quality solution. It is hence important to consider more solutions that decision-makers can compare and further explore based on additional criteria. Alongside the existing approaches of evolutionary diversity optimization, quality diversity, and multimodal optimization, this paper presents a fresh perspective on this challenge by considering the problem of identifying a fixed number of solutions with a pairwise distance above a specified threshold while maximizing their average quality. We obtain a first insight into these objectives by performing subset selection on the search trajectories of different well-established search heuristics, whether specifically designed with diversity in mind or not. We emphasize that the main goal of our work is not to present a new algorithm but to look at the problem in a more fundamental and theoretically tractable way by asking the question: What trade-off exists between the minimum distance within batches of solutions and their average quality? These insights also provide us with a way of making general claims concerning the properties of optimization problems, which shall in turn be useful for benchmarking algorithms of the approaches enumerated above. A possibly surprising outcome of our empirical study is the observation that naive uniform random sampling establishes a very strong baseline for our problem, hardly ever outperformed by the search trajectories of the considered heuristics. We interpret these results as a motivation to develop algorithms tailored to produce diverse solutions of high average quality.
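To make the studied trade-off concrete, here is a small Python sketch under assumed toy settings (OneMax-style fitness, Hamming distance): it builds a batch of k solutions with pairwise distance at least d_min by a greedy scan of a solution pool and compares it to the uniform random sampling baseline mentioned above. The greedy rule and all names are illustrative assumptions, not the paper's procedure.

```python
import random
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_diverse_subset(pool, fitness, k, d_min):
    """Greedy sketch: scan the pool in order of decreasing fitness and keep a
    solution only if it is at distance >= d_min from everything kept so far."""
    chosen = []
    for s in sorted(pool, key=fitness, reverse=True):
        if all(hamming(s, t) >= d_min for t in chosen):
            chosen.append(s)
        if len(chosen) == k:
            break
    return chosen

def batch_stats(batch, fitness):
    # (average quality, minimum pairwise distance) of a batch of solutions.
    avg = sum(fitness(s) for s in batch) / len(batch)
    dmin = min(hamming(a, b) for a, b in combinations(batch, 2))
    return avg, dmin

# Toy setting: OneMax-style fitness (number of ones) on bitstrings of length 20.
n, k, d_min = 20, 5, 4
fitness = sum
pool = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(2000)]

greedy = greedy_diverse_subset(pool, fitness, k, d_min)
uniform = random.sample(pool, k)  # the naive uniform baseline from the abstract
print(batch_stats(greedy, fitness), batch_stats(uniform, fitness))
```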
Abstract:The main goal of diversity optimization is to find a diverse set of solutions which satisfy some lower bound on their fitness. Evolutionary algorithms (EAs) are often used for such tasks, since they are naturally designed to optimize populations of solutions. This approach to diversity optimization, called EDO, has previously been studied from a theoretical perspective, but most studies considered only EAs with a trivial offspring population, such as the $(\mu + 1)$ EA. In this paper we give an example instance of the $k$-vertex cover problem which highlights a critical difference between diversity optimization and regular single-objective optimization, namely that there might be a locally optimal population from which we can escape only by replacing at least two individuals at once, which $(\mu + 1)$ algorithms cannot do. We also show that the $(\mu + \lambda)$ EA with $\lambda \ge \mu$ can effectively find a diverse population on $k$-vertex cover when using a mutation operator inspired by Branson and Sutton (TCS 2023). To avoid the problem of subset selection, which arises in the $(\mu + \lambda)$ EA when it optimizes diversity, we also propose the $(1_\mu + 1_\mu)$ EA$_D$, an analogue of the $(1 + 1)$ EA for populations, which is also efficient at optimizing diversity on the $k$-vertex cover problem.
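The following Python sketch conveys the idea of a $(1_\mu + 1_\mu)$ EA$_D$-style loop: the whole population acts as one individual, every member is mutated at once, and the offspring population is accepted only if it stays feasible and its diversity does not decrease. The operators and the toy feasibility test are placeholders, not the paper's exact definitions for $k$-vertex cover.

```python
import random
from itertools import combinations

def one_mu_plus_one_mu_ead(pop, mutate, diversity, feasible, iterations=10_000):
    """Population-as-individual loop: mutate every member, accept the offspring
    population only if all members remain feasible and diversity does not drop."""
    best = diversity(pop)
    for _ in range(iterations):
        offspring = [mutate(ind) for ind in pop]
        if all(feasible(ind) for ind in offspring):
            d = diversity(offspring)
            if d >= best:
                pop, best = offspring, d
    return pop

# Toy usage: bitstrings, a fitness lower bound as the feasibility test, and the
# sum of pairwise Hamming distances as the diversity measure.
n = 20
mutate = lambda x: tuple(b ^ (random.random() < 1 / n) for b in x)
diversity = lambda pop: sum(sum(a != b for a, b in zip(x, y)) for x, y in combinations(pop, 2))
feasible = lambda x: sum(x) >= n // 2
pop = [tuple(1 for _ in range(n)) for _ in range(5)]
print(diversity(one_mu_plus_one_mu_ead(pop, mutate, diversity, feasible)))
```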
Abstract:Chance-constrained problems involve stochastic components in the constraints which can be violated with a small probability. We investigate the impact of different types of chance constraints on the performance of iterative search algorithms and study the classical maximum coverage problem in graphs with chance constraints. Our goal is to evolve reliable chance constraint settings for a given graph where the performance of algorithms differs significantly not just in expectation but with high confidence. This allows us to better learn and understand how different types of algorithms can deal with different types of constraint settings and supports automatic algorithm selection. We develop an evolutionary algorithm that provides sets of chance constraints that differentiate the performance of two stochastic search algorithms with high confidence. We initially use the traditional approximation ratio as the fitness function of a (1+1)~EA to evolve instances, which proves inadequate for generating reliable instances. To address this issue, we introduce a new measure of the performance difference between two algorithms which takes the variances of the performance ratios into account. Our experiments show that our approach is highly successful in resolving the instability of the performance ratios and leads to evolving reliable sets of chance constraints with significantly different performance for various types of algorithms.
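The abstract does not spell out the variance-aware measure, so the following Python snippet is only a hypothetical illustration of the idea: compare the mean performance ratios of two algorithms on an instance, discounted by how much the ratios fluctuate across repeated runs, so that unstable instances score lower.

```python
import statistics

def variance_aware_difference(ratios_a, ratios_b):
    """Illustrative (not the paper's exact) measure: difference of mean
    performance ratios scaled down by their pooled standard deviation, so that
    instances with heavily fluctuating ratios are considered less reliable."""
    mean_gap = statistics.mean(ratios_a) - statistics.mean(ratios_b)
    pooled_sd = (statistics.pvariance(ratios_a) + statistics.pvariance(ratios_b)) ** 0.5
    return mean_gap / (1.0 + pooled_sd)

# Performance ratios (algorithm value / optimum) from repeated runs on one instance.
print(variance_aware_difference([0.95, 0.94, 0.96], [0.80, 0.55, 0.99]))
```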
Abstract:Diversity optimization is the class of optimization problems in which we aim to find a diverse set of good solutions. One frequently used approach to such problems is to use evolutionary algorithms, which evolve the desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyse EDO on a 3-objective function LOTZ$_k$, which is a modification of the 2-objective benchmark function (LeadingOnes, TrailingZeros). We prove that the GSEMO computes a set of all Pareto-optimal solutions in $O(kn^3)$ expected iterations. We also analyze the runtime of the GSEMO$_D$ (a modification of the GSEMO for diversity optimization) until it finds a population with the best possible diversity for two different diversity measures, the total imbalance and the sorted imbalances vector. For the first measure we show that the GSEMO$_D$ optimizes it asymptotically faster than it finds a Pareto-optimal population, namely in $O(kn^2\log(n))$ expected iterations, and for the second measure we show an upper bound of $O(k^2n^3\log(n))$ expected iterations. We complement our theoretical analysis with an empirical study, which shows very similar behavior for both diversity measures, close to the theoretical predictions.
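For readers unfamiliar with GSEMO, the Python sketch below runs it on the classical 2-objective LOTZ (LeadingOnes, TrailingZeros). The paper's 3-objective LOTZ$_k$ and the diversity-aware GSEMO$_D$ variant are not reproduced here, so this is only a baseline illustration of the archive update rule.

```python
import random

def lotz(x):
    """Classical 2-objective LOTZ: (LeadingOnes, TrailingZeros)."""
    n = len(x)
    leading_ones = next((i for i, b in enumerate(x) if b == 0), n)
    trailing_zeros = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
    return leading_ones, trailing_zeros

def dominates(a, b):
    # a is at least as good as b everywhere and strictly better somewhere (maximization).
    return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))

def gsemo(n=20, iterations=50_000):
    # GSEMO keeps an archive of mutually non-dominated solutions.
    pop = {tuple(random.randint(0, 1) for _ in range(n))}
    for _ in range(iterations):
        parent = random.choice(list(pop))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)
        cf = lotz(child)
        if any(dominates(lotz(p), cf) for p in pop):
            continue  # child is dominated, discard it
        # Remove archived points weakly dominated by the child, then add it.
        pop = {p for p in pop if not all(c >= v for c, v in zip(cf, lotz(p)))}
        pop.add(child)
    return pop

print(sorted(lotz(x) for x in gsemo()))  # ideally the full front {(i, n - i)}
```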
Abstract:Recently, surrogate functions based on tail inequalities were developed to evaluate chance constraints in the context of evolutionary computation, and several Pareto optimization algorithms using these surrogates were successfully applied to optimizing chance-constrained monotone submodular problems. However, the difference in performance between algorithms using the surrogates and those employing direct sampling-based evaluation remains unclear. In this paper, a sampling-based method is proposed to directly evaluate the chance constraint. Furthermore, to address problems with more challenging settings, an enhanced GSEMO algorithm integrated with an adaptive sliding window, called ASW-GSEMO, is introduced. In the experiments, ASW-GSEMO employing the sampling-based approach is tested on the chance-constrained version of the maximum coverage problem with different settings. Its results are compared with those from other algorithms using different surrogate functions. The experimental findings indicate that ASW-GSEMO with the sampling-based evaluation approach outperforms the other algorithms, while the performance of algorithms using the different evaluation methods remains comparable. Additionally, the behavior of ASW-GSEMO is visualized to explain the distinctions between it and the algorithms utilizing the surrogate functions.
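A minimal version of a sampling-based chance-constraint evaluation (as opposed to a tail-inequality surrogate) could look as follows; the sampler interface, the distributions and the parameter values are assumptions of this sketch.

```python
import random

def chance_constraint_satisfied(selection, weight_dists, budget, alpha, samples=1000):
    """Sampling-based evaluation sketch: draw the stochastic weights many times
    and estimate the probability that the total weight exceeds the budget; the
    constraint is accepted if that estimate is at most alpha.  weight_dists
    maps an item to a zero-argument sampler (an assumption of this sketch)."""
    violations = 0
    for _ in range(samples):
        total = sum(weight_dists[i]() for i in selection)
        if total > budget:
            violations += 1
    return violations / samples <= alpha

# Toy usage: three items with normally distributed weights.
dists = {0: lambda: random.gauss(3, 1), 1: lambda: random.gauss(4, 2), 2: lambda: random.gauss(5, 1)}
print(chance_constraint_satisfied({0, 1}, dists, budget=10, alpha=0.1))
```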
Abstract:Evolutionary algorithms are particularly effective for optimisation problems with dynamic and stochastic components. We propose multi-objective evolutionary approaches for the knapsack problem with stochastic profits under static and dynamic weight constraints. The chance-constrained problem model allows us to effectively capture the stochastic profits and associate a confidence level with the solutions' profits. We consider a bi-objective formulation that maximises expected profit and minimises variance, which allows optimising the problem independently of a specific confidence level on the profit. We derive a three-objective formulation by relaxing the weight constraint into an additional objective. We consider the GSEMO algorithm with standard and sliding-window-based parent selection to evaluate the objective formulations. Moreover, for the dynamic problem variant we modify the fitness formulations and algorithms to store some infeasible solutions in order to cater for future changes. We conduct experimental investigations on both problems using the proposed problem formulations and algorithms. Our results show that the three-objective approaches outperform those using the bi-objective formulations, and they improve further when GSEMO uses sliding-window selection.
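To illustrate the bi-objective formulation, the sketch below evaluates a selection of items by its expected profit and profit variance (assuming independent item profits, so variances add) and shows how a confidence level can be attached afterwards under a normal-distribution assumption; the paper's exact profit model may differ.

```python
from statistics import NormalDist

def bi_objective_fitness(selection, mu, var):
    """Bi-objective value of a selection: (expected profit, profit variance).
    Expected profits add up; with independent item profits the variances add too."""
    items = list(selection)
    return sum(mu[i] for i in items), sum(var[i] for i in items)

def profit_at_confidence(expected, variance, alpha):
    """Profit guaranteed with probability alpha, assuming a normally distributed
    total profit (one common model; an assumption of this sketch)."""
    return expected - NormalDist().inv_cdf(alpha) * variance ** 0.5

mu = {0: 10.0, 1: 7.0, 2: 4.0}
var = {0: 9.0, 1: 1.0, 2: 0.25}
e, v = bi_objective_fitness({0, 1}, mu, var)
print(e, v, profit_at_confidence(e, v, 0.95))
```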
Abstract:We consider whether conditions exist under which block-coordinate descent is asymptotically efficient in evolutionary multi-objective optimization, addressing an open problem. Block-coordinate descent, where an optimization problem is decomposed into $k$ blocks of decision variables and each of the blocks is optimized (with the others fixed) in sequence, is a technique used in some large-scale optimization problems such as airline scheduling; however, its use in multi-objective optimization is less studied. We propose a block-coordinate version of GSEMO and compare its running time to that of the standard GSEMO algorithm. Theoretical and empirical results on a bi-objective test function, a variant of LOTZ, demonstrate the existence of cases where block-coordinate descent is faster. The result may yield wider insights into this class of algorithms.
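The following Python sketch shows the block-coordinate idea on top of a standard GSEMO archive update: variation is restricted to one block of decision variables per iteration, cycled round-robin. The block schedule, the mutation rate, and the use of the classical 2-objective LOTZ as a stand-in test function are assumptions of this sketch rather than the paper's exact setup.

```python
import random

def block_mutate(x, block, n_blocks):
    """Standard bit mutation restricted to one block; other coordinates stay fixed."""
    n = len(x)
    lo, hi = block * n // n_blocks, (block + 1) * n // n_blocks
    p = 1.0 / (hi - lo)
    return tuple(b ^ (lo <= i < hi and random.random() < p) for i, b in enumerate(x))

def block_coordinate_gsemo(f, n, n_blocks, iterations=50_000):
    """Block-coordinate GSEMO sketch: usual archive update, but variation only
    touches one block per iteration, cycled round-robin."""
    def dominates(a, b):
        return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))
    pop = {tuple(random.randint(0, 1) for _ in range(n))}
    for t in range(iterations):
        parent = random.choice(list(pop))
        child = block_mutate(parent, t % n_blocks, n_blocks)
        cf = f(child)
        if any(dominates(f(p), cf) for p in pop):
            continue
        pop = {p for p in pop if not all(c >= v for c, v in zip(cf, f(p)))}
        pop.add(child)
    return pop

def lotz(x):  # classical 2-objective LOTZ as a stand-in test function
    n = len(x)
    return (next((i for i, b in enumerate(x) if not b), n),
            next((i for i, b in enumerate(reversed(x)) if b), n))

print(sorted(lotz(x) for x in block_coordinate_gsemo(lotz, n=24, n_blocks=4)))
```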
Abstract:Real-world optimization problems often involve stochastic and dynamic components. Evolutionary algorithms are particularly effective in these scenarios, as they can easily adapt to uncertain and changing environments, but uncertainty and dynamic changes are often studied in isolation. In this paper, we explore the use of 3-objective evolutionary algorithms for the chance-constrained knapsack problem with dynamic constraints. In our setting, the weights of the items are stochastic and the knapsack's capacity changes over time. We introduce a 3-objective formulation that is able to deal with the stochastic and dynamic components at the same time and is independent of the confidence level required for the constraint. This new approach is then compared to the 2-objective formulation, which is limited to a single confidence level. We evaluate the approach using two different multi-objective evolutionary algorithms (MOEAs), namely the global simple evolutionary multi-objective optimizer (GSEMO) and the multi-objective evolutionary algorithm based on decomposition (MOEA/D), across various benchmark scenarios. Our analysis highlights the advantages of the 3-objective formulation over the 2-objective formulation in addressing the dynamic chance-constrained knapsack problem.
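Since the abstract does not give the exact objectives, the sketch below shows one plausible confidence-independent 3-objective fitness for stochastic weights (maximize profit, minimize expected weight, minimize weight variance), together with an a-posteriori chance-constraint check under a normal approximation; it is an assumption-laden illustration, not necessarily the paper's formulation.

```python
from statistics import NormalDist

def three_objective_fitness(selection, profit, w_mean, w_var):
    """One plausible confidence-independent fitness (sketch): maximize total
    profit while minimizing the expected weight and the weight variance."""
    items = list(selection)
    return (sum(profit[i] for i in items),   # maximize total profit
            -sum(w_mean[i] for i in items),  # minimize expected weight
            -sum(w_var[i] for i in items))   # minimize weight variance

def feasible_at_confidence(selection, w_mean, w_var, capacity, alpha):
    """Check P(weight <= capacity) >= alpha afterwards, assuming a normal
    approximation of the total weight (an assumption of this sketch)."""
    m = sum(w_mean[i] for i in selection)
    s = sum(w_var[i] for i in selection) ** 0.5
    return m + NormalDist().inv_cdf(alpha) * s <= capacity

profit, w_mean, w_var = {0: 8, 1: 6}, {0: 3.0, 1: 4.0}, {0: 1.0, 1: 4.0}
print(three_objective_fitness({0, 1}, profit, w_mean, w_var),
      feasible_at_confidence({0, 1}, w_mean, w_var, capacity=12, alpha=0.9))
```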
Abstract:Creating diverse sets of high-quality solutions has become an important problem in recent years. Previous work on diverse-solutions problems considers solutions' objective quality and diversity, with one regarded as the optimization goal and the other as a constraint. In this paper, we treat this problem as a bi-objective optimization problem whose aim is to obtain a range of quality-diversity trade-offs. To address this problem, we frame the evolutionary process as evolving a population of populations, and present a suitable general implementation scheme that is compatible with existing evolutionary multi-objective search methods. We realize the scheme in NSGA-II and SPEA2, and test the methods on various instances of the maximum coverage, maximum cut and minimum vertex cover problems. The resulting non-dominated populations exhibit rich qualitative features, giving insights into the optimization instances and the quality-diversity trade-offs they induce.
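As a concrete sketch of the "population of populations" view, the snippet below treats a whole population as one individual, scores it by (average quality, mean pairwise Hamming distance), and mutates it member-wise; an outer MOEA such as NSGA-II would then evolve many such population-individuals along the quality-diversity front. The diversity measure and operators are illustrative assumptions, not the paper's exact scheme.

```python
import random
from itertools import combinations

def evaluate_population(pop, fitness):
    """Bi-objective value of a population-individual: average member quality
    and a diversity score, here the mean pairwise Hamming distance."""
    quality = sum(fitness(x) for x in pop) / len(pop)
    diversity = (sum(sum(a != b for a, b in zip(x, y)) for x, y in combinations(pop, 2))
                 / (len(pop) * (len(pop) - 1) / 2))
    return quality, diversity

def mutate_population(pop):
    # Variation acts on the population-individual: each member gets standard bit mutation.
    n = len(pop[0])
    return [tuple(b ^ (random.random() < 1 / n) for b in x) for x in pop]

# Toy usage with a OneMax-style quality (number of ones).
pop = [tuple(random.randint(0, 1) for _ in range(16)) for _ in range(4)]
print(evaluate_population(pop, fitness=sum))
print(evaluate_population(mutate_population(pop), fitness=sum))
```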