Abstract: Automated Algorithm Selection (AAS) is a popular meta-algorithmic approach and has been shown to work well for single-objective optimisation in combination with exploratory landscape analysis (ELA) features, i.e., (numerical) descriptive features derived from sampling the black-box (continuous) optimisation problem. In contrast to the abundance of features that describe single-objective optimisation problems, only a few features have been proposed for multi-objective optimisation so far. Building upon recent work on exploratory landscape features for box-constrained continuous multi-objective optimisation problems, we propose a novel and complementary set of features (MO-ELA). These features are based on a random sample of points in both the decision and the objective space. The features are divided into five groups depending on how they are calculated: non-dominated sorting, descriptive statistics, principal component analysis, graph structures, and gradient information. An AAS study conducted on well-established multi-objective benchmarks demonstrates that the proposed features help to distinguish between algorithm performances and thus adequately capture problem hardness, resulting in models that come very close to the virtual best solver. After feature selection, the newly proposed features are frequently among the top contributors, underscoring their value for algorithm selection and problem characterisation.
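
To illustrate the flavour of such sample-based features, the minimal Python sketch below computes one hypothetical feature from each of several of the named groups (non-dominated sorting, descriptive statistics, principal component analysis) plus a simple decision/objective-space interplay measure. The function name, the chosen features, and their exact definitions are illustrative assumptions and do not reproduce the actual MO-ELA feature set.

```python
import numpy as np


def illustrative_mo_features(X, Y):
    """Hypothetical sample-based multi-objective features (not the MO-ELA set).

    X : (n, d) array of decision-space sample points
    Y : (n, m) array of corresponding objective values (minimisation assumed)
    """
    n, m = Y.shape

    # Non-dominated-sorting group: fraction of non-dominated sample points.
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(Y <= Y[i], axis=1) & np.any(Y < Y[i], axis=1)
        dominated[i] = np.any(dominates_i)
    frac_nondominated = 1.0 - dominated.mean()

    # Descriptive-statistics group: mean pairwise correlation of the objectives.
    obj_corr = np.corrcoef(Y, rowvar=False)
    mean_obj_corr = float(np.mean(obj_corr[np.triu_indices(m, k=1)]))

    # Principal-component-analysis group: variance explained by the first
    # principal component of the standardised objective values.
    Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Yz, rowvar=False))
    pc1_explained_variance = float(eigvals[-1] / eigvals.sum())

    # Decision/objective-space interplay: correlation between pairwise
    # distances in decision space and in objective space.
    dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    dist_corr = float(np.corrcoef(dX[iu], dY[iu])[0, 1])

    return {
        "frac_nondominated": frac_nondominated,
        "mean_obj_corr": mean_obj_corr,
        "pc1_explained_variance": pc1_explained_variance,
        "dist_corr": dist_corr,
    }
```

Applied to, say, a uniform random or Latin hypercube sample of a bi-objective benchmark problem, such feature vectors could then be combined with observed algorithm performances to train a standard AAS model.
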
Abstract: Benchmarking has driven scientific progress in Evolutionary Computation, yet current practices fall short of real-world needs. Widely used synthetic suites such as BBOB and CEC isolate algorithmic phenomena but poorly reflect the structure, constraints, and information limitations of continuous and mixed-integer optimization problems in practice. This disconnect leads to the misuse of benchmarking suites for competitions, automated algorithm selection, and industrial decision-making, despite these suites being designed for different purposes. We identify key gaps in current benchmarking practices and tooling, including the limited availability of real-world-inspired problems, missing high-level features, and challenges in multi-objective and noisy settings. We propose a vision centered on curated real-world-inspired benchmarks, practitioner-accessible feature spaces, and community-maintained performance databases. Real progress requires coordinated effort: a living benchmarking ecosystem that evolves with real-world insights and supports both scientific understanding and industrial use.




Abstract: Benchmarking is one of the key ways in which we can gain insight into the strengths and weaknesses of optimization algorithms. In sampling-based optimization, considering the anytime behavior of an algorithm can provide valuable insights for further developments. In the context of multi-objective optimization, this anytime perspective is not as widely adopted as in the single-objective setting. In this paper, we propose a new software tool that uses principles from unbounded archiving as a logging structure. This leads to a clearer separation between experimental design and subsequent analysis decisions. We integrate this approach as a new Python module into the IOHprofiler framework and demonstrate its benefits by showcasing the ability to change indicators, aggregations, and ranking procedures during the analysis pipeline.
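
As a rough illustration of the underlying idea (not the IOHprofiler API), the sketch below logs every evaluated objective vector into an unbounded archive and only later, at analysis time, recomputes an anytime indicator trajectory for arbitrary budgets. All class, function, and parameter names here are hypothetical assumptions.

```python
import numpy as np


def nondominated(Y):
    """Return the non-dominated subset of the objective vectors in Y (minimisation)."""
    keep = [i for i, y in enumerate(Y)
            if not np.any(np.all(Y <= y, axis=1) & np.any(Y < y, axis=1))]
    return Y[keep]


def hypervolume_2d(front, ref=(1.0, 1.0)):
    """Hypervolume of a 2-D non-dominated front w.r.t. a reference point (minimisation)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front[np.argsort(front[:, 0])]:
        if f1 < ref[0] and f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv


class UnboundedArchiveLog:
    """Log every evaluated objective vector; indicators are chosen only at analysis time."""

    def __init__(self):
        self._evals = []  # (evaluation counter, objective vector)

    def log(self, y):
        self._evals.append((len(self._evals) + 1, np.asarray(y, dtype=float)))

    def anytime_values(self, indicator, budgets):
        """Indicator value of the non-dominated archive after each given budget (>= 1)."""
        trajectory = []
        for b in budgets:
            Y = np.array([y for t, y in self._evals if t <= b])
            trajectory.append(indicator(nondominated(Y)))
        return trajectory
```

Because the raw archive is kept in full, the same log could later be re-analysed with a different indicator, reference point, aggregation, or set of budgets without rerunning any experiments, which is the separation between experimental design and analysis decisions the abstract refers to.
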