Abstract: Gomez proposes a formal and systematic approach for characterizing stochastic global optimization algorithms. Using it, Gomez formalizes algorithms with a fixed next-population stochastic method, i.e., algorithms defined as stationary Markov processes. This is the case for standard versions of hill-climbing, parallel hill-climbing, generational genetic, steady-state genetic, and differential evolution algorithms. This paper continues that systematic formal approach. First, we generalize the sufficient-conditions convergence lemma from stationary to non-stationary Markov processes. Second, we develop Markov kernels for some selection schemes. Finally, we formalize both simulated annealing and evolutionary strategies using the systematic formal approach.
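A minimal sketch of the non-stationary setting this generalization addresses (the symbols $K_t$, $\mu_t$, and $A_\epsilon$ below are illustrative, not necessarily the paper's notation): a stationary SGoal iterates a single Markov kernel $K$, whereas a non-stationary SGoal, such as simulated annealing with a cooling schedule, applies a time-indexed family of kernels $K_t$, so the population distribution evolves as

$$\mu_t(A) \;=\; \int_{\Omega} K_t(x, A)\, \mu_{t-1}(dx), \qquad t = 1, 2, \dots$$

Convergence then asks that $\mu_t(A_\epsilon) \to 1$, where $A_\epsilon$ is the set of strict $\epsilon$-optimal states; the generalized lemma supplies sufficient conditions under which this holds.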
Abstract: In this paper, we develop a set of genetic programming operators and a population initialization process based on concepts of functional-programming rewriting for boosting inductive genetic programming. These genetic operators are used within a hybrid adaptive evolutionary algorithm that evolves operator rates at the same time it evolves the solution. Solutions are represented as recursive functions: the genome is encoded as an ordered list of trees, and the phenotype is written in a simple functional programming language that uses rewriting as its operational semantics (computational model). The fitness is the number of examples successfully deduced divided by the cardinality of the set of examples. Parents are selected with a tournament selection mechanism, and the next population is obtained following a steady-state strategy. The evolutionary process can use previously induced functions (programs) as background knowledge. We compare the performance of our technique on a set of problems that are hard for classical genetic programming. In particular, we take as test bed the problem of obtaining equivalent algebraic expressions for some notable products (such as the square of a binomial and the cube of a binomial), and the recursive formulas for the sum of the first n natural numbers and the sum of the squares of the first n natural numbers.
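A minimal Python sketch of the fitness and selection steps described above; `run`, `program`, and `examples` are hypothetical placeholders for the rewriting-based evaluator and the example set, not identifiers from the paper.

```python
import random

def fitness(program, examples, run):
    """Fraction of examples the induced program deduces correctly.

    `run(program, inputs)` is a hypothetical evaluator standing in for the
    rewriting-based functional semantics; `examples` is a list of
    (inputs, expected_output) pairs.
    """
    correct = sum(1 for inputs, expected in examples
                  if run(program, inputs) == expected)
    return correct / len(examples)

def tournament_select(population, fitnesses, k=2):
    """Return the best individual among k randomly drawn contenders."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]
```

In a steady-state loop, the selected parents produce one or a few offspring that replace the worst individuals, rather than rebuilding the whole population each generation.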
Abstract: The major difficulty in Multi-objective Optimization Evolutionary Algorithms (MOEAs) is finding solutions that converge towards the true Pareto front while maintaining high diversity. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges due to their dependence on the selection of the EA parameters. Moreover, the process of setting such parameters is considered time-consuming, and several research works have tried to deal with this problem. This paper proposes a new multi-objective algorithm, MoHAEA, as an extension of the Hybrid Adaptive Evolutionary Algorithm (HAEA). MoHAEA dynamically adapts operator application probabilities (rates) while evolving solutions of the multi-objective problem, combining the dominance- and decomposition-based approaches. MoHAEA is compared with four state-of-the-art MOEAs, namely MOEA/D, pa$\lambda$-MOEA/D, MOEA/D-AWA, and NSGA-II, on ten widely used multi-objective test problems. Experimental results indicate that MoHAEA outperforms the benchmark algorithms in its ability to find a well-covered and well-distributed set of points on the Pareto front.
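Since MoHAEA inherits HAEA's per-individual operator-rate adaptation, a hedged Python sketch of that mechanism may help; the reward/punish rule and names below are illustrative assumptions, not MoHAEA's exact update.

```python
import random

def adapt_operator_rates(rates, applied_op, improved):
    """HAEA-style rate adaptation (a sketch, not MoHAEA's exact rule).

    Each individual carries its own operator probabilities (`rates`). After
    applying operator `applied_op`, its rate is rewarded when the offspring
    improves on the parent (in the multi-objective setting, e.g. by Pareto
    dominance or a better decomposition score) and punished otherwise; the
    rates are then renormalized to sum to one.
    """
    delta = random.random()                  # random learning rate in [0, 1)
    if improved:
        rates[applied_op] *= (1.0 + delta)   # reward the operator just used
    else:
        rates[applied_op] *= (1.0 - delta)   # punish the operator just used
    total = sum(rates)
    return [r / total for r in rates]
```

Because the rates travel with each individual and are updated from local feedback, no global parameter tuning of operator probabilities is required, which is the dependency on EA parameters the paper targets.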
Abstract: Some global optimization problems cannot be solved using analytic methods, so numeric/algorithmic approaches are used to find near-optimal solutions for them. A stochastic global optimization algorithm (SGoal) is an iterative algorithm that generates a new population (a set of candidate solutions) from a previous population using stochastic operations. Although some research works have formalized SGoals using Markov kernels, such formalizations are not general and are sometimes unclear. In this paper, we propose a comprehensive and systematic formal approach for studying SGoals. First, we present the required probability theory ($\sigma$-algebras, measurable functions, kernels, Markov chains, products, convergence, and so on) and prove that some algorithmic functions, such as swapping and projection, can be represented by kernels. Then, we introduce the notion of join-kernel as a way of characterizing the combination of stochastic methods. Next, we define the optimization space, a formal structure (a set with a $\sigma$-algebra that contains strict $\epsilon$-optimal states) for studying SGoals, and we develop kernels, such as sort and permutation, on this structure. Finally, we present some popular SGoals in terms of the developed theory, introduce sufficient conditions for the convergence of an SGoal, and prove convergence of some popular SGoals.
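As a hedged illustration of the kernel viewpoint (the notation below is ours, not necessarily the paper's): a Markov kernel on a measurable space $(\Omega, \Sigma)$ is a map $K : \Omega \times \Sigma \to [0,1]$ such that $K(x, \cdot)$ is a probability measure for every $x \in \Omega$ and $K(\cdot, A)$ is measurable for every $A \in \Sigma$. A deterministic method $f$ (e.g., swapping or projection) is represented by the Dirac kernel

$$K_f(x, A) = \mathbf{1}_A\bigl(f(x)\bigr),$$

and applying one stochastic step $K_1$ followed by another step $K_2$ corresponds to the product kernel

$$(K_1 K_2)(x, A) = \int_{\Omega} K_2(y, A)\, K_1(x, dy),$$

which is the building block used to assemble a whole SGoal iteration from its individual operations.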