Abstract: Multimodality is one of the biggest challenges in optimization, as local optima often prevent algorithms from making progress. This not only affects local strategies, which can get stuck, but also hinders meta-heuristics such as evolutionary algorithms in converging to the global optimum. In this paper, we present a new concept of gradient descent that is able to escape local traps. It relies on a multiobjectivization of the original problem and applies the recently proposed, and here slightly modified, multi-objective local search mechanism MOGSA. We use a sophisticated visualization technique for multi-objective problems to demonstrate the working principle of our idea. As such, this work highlights the transfer of new insights from the multi-objective to the single-objective domain and provides first visual evidence that multiobjectivization can link single-objective local optima in multimodal landscapes.
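To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of multiobjectivization combined with a multi-objective gradient descent step: a multimodal objective is paired with an additional, unimodal helper objective, and the search follows the negative sum of the normalized single-objective gradients instead of the single-objective gradient alone. The choice of test function, helper objective, its center, and the step size are assumptions made purely for illustration.

```python
import numpy as np

def f1(x):
    # multimodal original objective (Rastrigin), chosen here only as an example
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def f2(x):
    # additional unimodal helper objective (sphere); center is an assumption
    c = np.array([1.0, 1.0])
    return np.sum((x - c)**2)

def num_grad(f, x, h=1e-6):
    # central finite differences are sufficient for this sketch
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def mo_descent_direction(x):
    # bi-objective descent direction: negative sum of the normalized
    # single-objective gradients; it vanishes at locally efficient points
    g1, g2 = num_grad(f1, x), num_grad(f2, x)
    n1, n2 = np.linalg.norm(g1), np.linalg.norm(g2)
    if n1 == 0 or n2 == 0:        # already optimal for one of the objectives
        return np.zeros_like(x)
    return -(g1 / n1 + g2 / n2)

x = np.array([2.2, -1.3])
for _ in range(200):
    x = x + 0.01 * mo_descent_direction(x)   # fixed step size, an assumption
print(x, f1(x))
```

The sketch only illustrates the direction computation; MOGSA itself additionally exploits the structure of the resulting multi-objective landscape to move from one locally efficient set to the next.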
Abstract: When dealing with continuous single-objective problems, multimodality poses one of the biggest difficulties for global optimization. Local optima often prevent algorithms from making progress and thus pose a severe threat. In this paper, we analyze how single-objective optimization can benefit from multiobjectivization by considering an additional objective. Using a sophisticated visualization technique based on multi-objective gradients, we illustrate and examine the properties of the arising multi-objective landscapes. We empirically show that the multi-objective optimizer MOGSA is able to exploit these properties to overcome local traps. The performance of MOGSA is assessed on a testbed of several functions provided by the COCO platform, and the results are compared to those of the local optimizer Nelder-Mead.
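For context, the baseline comparison mentioned above can be reproduced in spirit with a few lines of code: running Nelder-Mead from a fixed start point on a multimodal function and observing that it terminates in a nearby local optimum. The test function and start point below are illustrative assumptions; the paper itself benchmarks on functions provided by the COCO platform.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # multimodal example function; stands in for a COCO/BBOB test problem
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x0 = np.array([2.2, -1.3])                    # assumed start point
res = minimize(rastrigin, x0, method="Nelder-Mead")
print(res.x, res.fun)   # typically a nearby local optimum, not the global one
```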