Abstract: A common goal in evolutionary multi-objective optimization is to find suitable finite-size approximations of the Pareto front of a given multi-objective optimization problem. While many multi-objective evolutionary algorithms have proven to be very efficient in finding good Pareto front approximations, they may require substantial resources or may even fail to obtain optimal or nearly optimal approximations. Here, optimality is implicitly defined by the chosen performance indicator. In this work, we propose a set-based Newton method for Hausdorff approximations of the Pareto front to be used within multi-objective evolutionary algorithms. To this end, we first generalize the previously proposed Newton step for this performance indicator to the treatment of constrained problems and general reference sets. To approximate the target Pareto front, we propose a particular strategy for generating the reference set that utilizes the data gathered by the evolutionary algorithm during its run. Finally, we show the benefit of the Newton method as a post-processing step on several benchmark test functions and different base evolutionary algorithms.
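The abstract leaves the performance indicator unnamed. A natural reading of "Hausdorff approximations" is the averaged Hausdorff distance Δp of Schütze et al.; the equations below state that indicator under this assumption, for a candidate set A and a reference set R in objective space.

```latex
% Averaged Hausdorff distance \Delta_p between a candidate set A and a
% reference set R. Assumption: the abstract does not name the indicator;
% \Delta_p is the standard Hausdorff-based choice, combining GD and IGD.
\Delta_p(A,R) = \max\bigl(\mathrm{GD}_p(A,R),\ \mathrm{IGD}_p(A,R)\bigr),
\qquad p \ge 1,
```

```latex
\mathrm{GD}_p(A,R)  = \Bigl(\tfrac{1}{|A|}\textstyle\sum_{a\in A} d(a,R)^p\Bigr)^{1/p},
\qquad
\mathrm{IGD}_p(A,R) = \Bigl(\tfrac{1}{|R|}\textstyle\sum_{r\in R} d(r,A)^p\Bigr)^{1/p},
```

where d(x, S) denotes the Euclidean distance from a point x to the nearest element of the set S. The set-based Newton step would then seek a stationary point of this indicator with respect to the archive, which is why the choice of reference set R matters.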
Abstract: Neural networks and deep learning are changing the way that artificial intelligence is being done. Efficiently choosing a suitable network architecture and fine-tuning its hyper-parameters for a specific dataset is a time-consuming task given the staggering number of possible alternatives. In this paper, we address the problem of model selection by means of a fully automated framework for efficiently selecting a neural network model for a given task: classification or regression. The algorithm, named Automatic Model Selection (AMS), is a modified micro-genetic algorithm that automatically and efficiently finds the most suitable neural network model for a given dataset. The main contributions of this method are a simple list-based encoding of neural networks as genotypes in an evolutionary algorithm; new crossover and mutation operators; a fitness function that considers both the accuracy of the model and its complexity; and a method to measure the similarity between two neural networks. AMS is evaluated on two different datasets. By comparing models obtained with AMS to state-of-the-art models for each dataset, we show that AMS can automatically find efficient neural network models. Furthermore, AMS is computationally efficient and can make use of distributed computing paradigms to further boost its performance.
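The abstract does not detail the encoding or the operators; the sketch below shows what a list-based genotype with crossover, mutation, and a complexity-penalized fitness might look like. The gene semantics (hidden-layer widths), the one-point crossover, and the penalty weight are illustrative assumptions, not the paper's actual design.

```python
import random

# Hypothetical list-based genotype: each gene is the width of one hidden
# layer (an assumption; the paper's encoding may also carry activations,
# learning rates, etc.).
def random_genotype(max_layers=4, max_units=128):
    return [random.randint(1, max_units)
            for _ in range(random.randint(1, max_layers))]

def crossover(parent_a, parent_b):
    # One-point crossover on the layer lists (illustrative choice);
    # cut points are chosen so the child is never empty.
    cut_a = random.randint(1, len(parent_a))
    cut_b = random.randint(0, len(parent_b))
    return parent_a[:cut_a] + parent_b[cut_b:]

def mutate(genotype, rate=0.2, max_units=128):
    # Perturb layer widths; occasionally grow or shrink the network.
    child = [random.randint(1, max_units) if random.random() < rate else g
             for g in genotype]
    if random.random() < rate:
        child.append(random.randint(1, max_units))
    elif len(child) > 1 and random.random() < rate:
        child.pop(random.randrange(len(child)))
    return child

def fitness(accuracy, n_params, alpha=1e-6):
    # Reward accuracy, penalize complexity; the weighting alpha is an
    # assumption standing in for the paper's actual trade-off.
    return accuracy - alpha * n_params
```

A micro-genetic algorithm would evolve only a handful of such genotypes per generation, restarting from the elite individual whenever diversity collapses, which keeps the number of expensive network trainings small.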
Abstract: This paper presents a framework for estimating the remaining useful life (RUL) of mechanical systems. The framework consists of a multi-layer perceptron and an evolutionary algorithm for optimizing the data-related parameters. The framework makes use of a strided time window to estimate the RUL of mechanical components. Tuning the data-related parameters can become a very time-consuming task, so the framework presented here automatically reshapes the data such that the efficiency of the model is increased. Furthermore, the complexity of the model is kept low, e.g., neural networks with few hidden layers and few neurons at each layer. Simple models have several advantages, such as short training times and the capacity to be deployed in environments with limited computational resources, such as embedded systems. The proposed method is evaluated on the publicly available C-MAPSS dataset, and its accuracy is compared against other state-of-the-art methods on the same dataset.
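As an illustration of the strided-time-window idea, the sketch below slices one run-to-failure sensor sequence into fixed-length windows, each labeled with the RUL at the window's end. The window length and stride here are hypothetical values; in the paper they are among the data-related parameters tuned by the evolutionary algorithm rather than fixed by hand.

```python
import numpy as np

def strided_windows(series, window_len=30, stride=1):
    """Slice a run-to-failure sequence into (window, RUL) training pairs.

    series : array of shape (n_cycles, n_sensors), one unit's full
             run-to-failure trajectory (e.g. one engine from C-MAPSS).
    window_len, stride : hypothetical defaults; the paper optimizes
             such data-related parameters with an evolutionary algorithm.
    """
    n_cycles = len(series)
    windows, ruls = [], []
    for end in range(window_len, n_cycles + 1, stride):
        windows.append(series[end - window_len:end])
        # RUL label: cycles remaining after the last cycle in the window.
        ruls.append(n_cycles - end)
    return np.asarray(windows), np.asarray(ruls)

# Example: 200 cycles of 24 sensors -> 171 windows of shape (30, 24),
# with labels counting down from 170 to 0.
X, y = strided_windows(np.random.rand(200, 24))
```

Because the window length and stride directly control both the number of training samples and the input size of the multi-layer perceptron, searching over them is what lets the framework keep the network itself small.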