Abstract: Black-box optimization (BBO) algorithms are concerned with finding the best solutions for problems whose analytical details are missing. Most classical methods for such problems rest on strong, fixed \emph{a priori} assumptions, such as a Gaussian distribution. However, many complex real-world problems are far from the assumed \emph{a priori} distribution, which creates unexpected obstacles for these methods. In this paper, we present an optimizer using generative adversarial nets (OPT-GAN) that guides the search on black-box problems by estimating the distribution of the optima. The method learns the extensive distribution of the optimal region dominated by selective candidates. Experiments demonstrate that OPT-GAN outperforms classical BBO algorithms, in particular those with Gaussian assumptions.
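The core idea above, iteratively learning a generative model of the elite (optimal) region and sampling new candidates from it, can be sketched without an actual GAN. The sketch below is a minimal stand-in that replaces the trained generator with a kernel-density-style resampler over elite candidates; the function names, population sizes, and shrinking bandwidth are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sphere(x):
    # Toy black-box objective (its analytical form is hidden in practice).
    return np.sum(x ** 2)

def elite_resample(elites, bandwidth, rng):
    # Crude stand-in for a trained generator: resample elite candidates
    # with Gaussian perturbation, i.e. sample from a kernel density
    # estimate of the elite-region distribution.
    idx = rng.integers(len(elites), size=len(elites))
    return elites[idx] + rng.normal(0.0, bandwidth, elites.shape)

def bbo_optimize(f, dim=5, pop=64, elite_frac=0.25, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    bandwidth = 1.0
    for _ in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        elites = X[order[: int(pop * elite_frac)]]
        # Next population is drawn from the learned elite distribution.
        X = elite_resample(np.repeat(elites, int(1 / elite_frac), axis=0),
                           bandwidth, rng)
        bandwidth *= 0.95  # narrow exploration as the search focuses
    fitness = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fitness)], fitness.min()

best_x, best_f = bbo_optimize(sphere)
```

Unlike a fixed Gaussian search distribution, the resampled elite set can in principle track multimodal or skewed optimal regions, which is the motivation the abstract gives for replacing parametric assumptions with a learned generator.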
Abstract: Classical benchmark problems employ multiple transformation techniques to increase optimization difficulty, e.g., shifting to counteract centering effects and rotation to counteract dimension sensitivity. Although they test transformation invariance, such operations do not really change the landscape's "shape" but rather its "viewpoint". For instance, after rotation, ill-conditioned problems are reoriented but still keep proportional components, which, to some extent, does not create much of an obstacle to optimization. In this paper, inspired by image processing, we investigate a novel graphic conformal-mapping transformation on benchmark problems that deforms the function shape. The bending operation does not alter the function's basic properties, e.g., a unimodal function almost maintains its unimodality after bending, but it can modify the shape of the region of interest in the search space. Experiments indicate that the same optimizer spends more of its search budget and encounters more failures on the conformally bent functions than on the rotated versions. Several parameters of the proposed functions are also analyzed to reveal the performance sensitivity of evolutionary algorithms.
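The bending operation can be illustrated by treating a 2-D search point as a complex number and composing the benchmark function with an analytic (hence conformal, angle-preserving) map. The particular map below, w = z + c z^2, is an assumption chosen for illustration; the paper's actual transformation may differ.

```python
import numpy as np

def sphere(x):
    # Unimodal 2-D benchmark with its optimum at the origin.
    return float(np.sum(np.asarray(x) ** 2))

def conformal_bend(f, c=0.1):
    # Wrap a 2-D function with a conformal map: view (x, y) as
    # z = x + iy and bend the domain via w = z + c * z**2, which is
    # analytic, so angles are preserved wherever its derivative != 0.
    def bent(x):
        z = complex(x[0], x[1])
        w = z + c * z * z
        return f([w.real, w.imag])
    return bent

f_bent = conformal_bend(sphere, c=0.1)
```

Because the map is a smooth reparameterization of the domain, the bent sphere stays unimodal (its optimum moves to the preimage of the original optimum, here still the origin), matching the claim that bending preserves basic function properties while deforming the landscape's shape.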
Abstract: Neuro-encoded expression programming, which aims to offer a novel continuous representation of the combinatorial encoding used by genetic programming methods, is proposed in this paper. Genetic programming with linear representation uses nature-inspired operators to tune expressions and ultimately searches for the best explicit function to fit the data. The encoding mechanism is essential for genetic programming to find a desirable solution efficiently. However, linear representation methods manipulate the expression tree in a discrete solution space, where a small change of the input can cause a large change of the output. Such unsmooth landscapes destroy local information and make searching difficult. Neuro-encoded expression programming constructs the gene string with a recurrent neural network (RNN), and the weights of the network are optimized by powerful continuous evolutionary algorithms. The neural network mapping smooths the sharp fitness landscape and provides rich neighborhood information to find the best expression. Experiments indicate that the novel approach improves test accuracy and efficiency on several well-known symbolic regression problems.
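The key mechanism, an RNN whose flat weight vector is the continuous genotype and whose decoded token sequence is the gene string, can be sketched as below. This is a toy stand-in: the token set, hidden size, input-free recurrence, and greedy decoding are illustrative assumptions, not the paper's exact architecture, and fitness evaluation of the decoded expression is omitted.

```python
import numpy as np

TOKENS = ["+", "*", "x", "1"]  # toy primitive set for gene strings

def rnn_decode(weights, hidden=8, steps=7):
    # Decode a token sequence from a tiny RNN. The flat weight vector
    # is the genotype a continuous evolutionary algorithm would search;
    # small weight changes yield gradual changes in the decoded string.
    n = len(TOKENS)
    Wh = weights[: hidden * hidden].reshape(hidden, hidden)
    Wo = weights[hidden * hidden : hidden * hidden + hidden * n].reshape(hidden, n)
    h = np.tanh(weights[-hidden:])  # initial hidden state, also evolved
    seq = []
    for _ in range(steps):
        h = np.tanh(Wh @ h)                   # input-free recurrence
        seq.append(int(np.argmax(Wo.T @ h)))  # greedy token choice
    return [TOKENS[i] for i in seq]

dim = 8 * 8 + 8 * len(TOKENS) + 8  # total genotype length
genotype = np.random.default_rng(0).normal(size=dim)
expr = rnn_decode(genotype)
```

In a full system, the decoded string would be parsed as an expression and scored against the regression data, and an optimizer such as CMA-ES would update `genotype`; the RNN mapping is what converts that discrete search into a continuous one.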