Abstract: Constrained multiobjective optimization has gained much interest in the past few years. However, constrained multiobjective optimization problems (CMOPs) are still unsatisfactorily understood. Consequently, the choice of adequate CMOPs for benchmarking is difficult and lacks a formal background. This paper addresses this issue by exploring CMOPs from a performance space perspective. First, it presents a novel performance assessment approach designed explicitly for constrained multiobjective optimization. This methodology is a first attempt to simultaneously measure Pareto-front approximation quality and constraint satisfaction. Second, it proposes an approach for measuring the capability of a given optimization problem to differentiate among algorithm performances. Finally, this approach is used to compare eight frequently used artificial test suites of CMOPs. The experimental results reveal which suites are more effective at discriminating among three well-known multiobjective optimization algorithms. Benchmark designers can use these results to select the most appropriate CMOPs for their needs.
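To make the two quantities that such an assessment combines concrete, the sketch below pairs two standard per-run measures for a bi-objective minimization CMOP: the hypervolume of the feasible nondominated points (Pareto-front approximation) and the mean overall constraint violation (constraint satisfaction). This is a minimal illustrative sketch under those assumptions, not the paper's joint methodology; the function names and the reference point are hypothetical.

def overall_violation(g_values):
    # Standard overall constraint violation for inequality constraints g_i(x) <= 0.
    return sum(max(0.0, g) for g in g_values)

def hypervolume_2d(points, ref):
    # Hypervolume of a set of bi-objective (minimization) points w.r.t. a
    # reference point; points are assumed to lie within the reference box.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):             # ascending in the first objective
        if f2 < prev_f2:                       # dominated points contribute nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def assess_run(population, ref):
    # population: list of (objective_pair, constraint_values) from one run.
    feasible = [f for f, g in population if overall_violation(g) == 0.0]
    quality = hypervolume_2d(feasible, ref) if feasible else 0.0
    violation = sum(overall_violation(g) for _, g in population) / len(population)
    return quality, violation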
Abstract: Despite the increasing interest in constrained multiobjective optimization in recent years, constrained multiobjective optimization problems (CMOPs) are still unsatisfactorily understood and characterized. For this reason, the selection of appropriate CMOPs for benchmarking is difficult and lacks a formal background. We address this issue by extending landscape analysis to constrained multiobjective optimization. By employing four exploratory landscape analysis techniques, we propose 29 landscape features (of which 19 are novel) to characterize CMOPs. These landscape features are then used to compare eight frequently used artificial test suites against a recently proposed suite consisting of real-world problems based on physical models. The experimental results reveal that the artificial test problems fail to adequately represent some realistic characteristics, such as a strong negative correlation between the objectives and constraints. Moreover, our findings show that all the studied artificial test suites have advantages and limitations, and that no "perfect" suite exists. Benchmark designers can use the obtained results to select or generate appropriate CMOP instances based on the characteristics they want to explore.
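As a flavor of what a sampling-based landscape feature can look like, the following sketch estimates the correlation between an objective and the overall constraint violation over random decision vectors, which relates to the objective-constraint correlation mentioned above. The problem interface (objectives(x), constraints(x), variable bounds) is an assumption, and this is a generic illustration rather than one of the paper's 29 features.

import random
import statistics

def overall_violation(g_values):
    # Standard overall constraint violation for inequality constraints g_i(x) <= 0.
    return sum(max(0.0, g) for g in g_values)

def objective_violation_correlation(objectives, constraints, bounds, n_samples=1000, seed=0):
    # Pearson correlation (statistics.correlation, Python 3.10+) between the
    # first objective and the total violation over uniformly sampled decision
    # vectors; assumes the sampled values are not constant.
    rng = random.Random(seed)
    f_vals, v_vals = [], []
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f_vals.append(objectives(x)[0])
        v_vals.append(overall_violation(constraints(x)))
    return statistics.correlation(f_vals, v_vals)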
Abstract: This paper proposes a hybrid self-adaptive evolutionary algorithm for graph coloring that incorporates the following novel elements: a heuristic genotype-phenotype mapping, a swap local search heuristic, and a neutral survivor selection operator. This algorithm was compared with the evolutionary algorithm with the SAW method of Eiben et al., the Tabucol algorithm of Hertz and de Werra, and the hybrid evolutionary algorithm of Galinier and Hao. The performance of these algorithms was tested on a suite of randomly generated 3-colorable graphs with various structural features, such as graph size, type, edge density, and variability in the sizes of the color classes. Furthermore, the test graphs were generated around the phase transition, where graphs are hard to color. The purpose of the extensive experimental work was threefold: to investigate the behavior of the tested algorithms in the phase transition, to identify the impact that hybridization with the traditional DSatur heuristic has on the evolutionary algorithm, and to show how graph structural features influence the performance of graph-coloring algorithms. The results indicate that the performance of the hybrid self-adaptive evolutionary algorithm is comparable with, or better than, that of the hybrid evolutionary algorithm, which is one of the best graph-coloring algorithms today. Moreover, the fact that all the considered algorithms performed poorly on flat graphs confirms that this type of graph is indeed the hardest to color.
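For readers unfamiliar with the traditional heuristic mentioned above, the sketch below shows the classical DSatur greedy coloring of Brelaz: repeatedly color the uncolored vertex with the highest saturation degree (breaking ties by degree), using the smallest feasible color. The adjacency-list representation is an assumption, and the paper's hybrid evolutionary algorithm itself is not reproduced here.

def dsatur(adjacency):
    # adjacency: dict mapping each vertex to the set of its neighbors.
    coloring = {}
    uncolored = set(adjacency)

    def saturation(v):
        # Number of distinct colors already used by the neighbors of v.
        return len({coloring[u] for u in adjacency[v] if u in coloring})

    while uncolored:
        v = max(uncolored, key=lambda u: (saturation(u), len(adjacency[u])))
        used = {coloring[u] for u in adjacency[v] if u in coloring}
        c = 0
        while c in used:
            c += 1
        coloring[v] = c
        uncolored.remove(v)
    return coloring

# Example: a triangle needs three colors.
# dsatur({0: {1, 2}, 1: {0, 2}, 2: {0, 1}})  ->  {0: 0, 1: 1, 2: 2}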