Abstract: In certain situations, neural networks represent environment states in their hidden activations. Our goal is to visualize which environment states a network is representing. We experiment with a recurrent neural network (RNN) architecture followed by a decoder network. After training, we apply the decoder to the network's intermediate representations to visualize what they represent. We define a quantitative interpretability metric and use it to demonstrate that hidden states can be highly interpretable on a simple task. We also develop autoencoder and adversarial techniques and show that they benefit interpretability.
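A minimal sketch of the core idea described above: train an RNN with a decoder on its final hidden state, then reuse that same decoder on every intermediate hidden state to visualize what each timestep represents. The framework (PyTorch), the GRU/linear-decoder choices, and all names here are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RNNWithDecoder(nn.Module):
    """GRU encoder plus a decoder; the decoder can be reapplied to any
    intermediate hidden state to visualize what it represents."""
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, output_dim)  # assumed decoder form

    def forward(self, x):
        hidden_seq, _ = self.rnn(x)            # (batch, time, hidden_dim)
        # The training objective only uses the final hidden state.
        return self.decoder(hidden_seq[:, -1])

    @torch.no_grad()
    def decode_intermediate(self, x):
        # After training, apply the same decoder to every timestep's
        # hidden state to inspect the intermediate representations.
        hidden_seq, _ = self.rnn(x)
        return self.decoder(hidden_seq)        # (batch, time, output_dim)
```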
Abstract: Hard optimisation problems such as Boolean Satisfiability typically have long solving times and can usually be solved by many algorithms, although performance varies widely in practice. Research has shown that no single algorithm outperforms all the others; it is therefore crucial to select the best algorithm for a given problem. Supervised machine learning models can accurately predict which solver is best for a given problem, but creating the labelled training data requires running every solver in the portfolio on every available example. As this approach cannot scale, we developed an active learning framework that addresses the problem by constructing an optimal training set, so that the learner achieves equal or higher performance with less training data. Our work demonstrates that active learning is beneficial for algorithm selection techniques and provides practical guidance for incorporating it into existing systems.
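The abstract does not specify the query strategy, so the following is only a sketch of a generic active-learning loop for algorithm selection under assumed choices: least-confident uncertainty sampling, a random-forest selector, and a hypothetical run_solvers callable that runs the portfolio on one instance and returns the best solver's label.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_algorithm_selection(X_pool, run_solvers, n_init=50, n_queries=200, batch=10):
    """Active-learning loop: the portfolio is only run (expensive labelling)
    on instances the current selector is least certain about."""
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(X_pool), n_init, replace=False))
    y = {i: run_solvers(X_pool[i]) for i in labelled}  # expensive: runs all solvers

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(n_queries // batch):
        model.fit(X_pool[labelled], [y[i] for i in labelled])
        unlabelled = [i for i in range(len(X_pool)) if i not in y]
        if not unlabelled:
            break
        probs = model.predict_proba(X_pool[unlabelled])
        uncertainty = 1.0 - probs.max(axis=1)          # least-confident sampling
        picked = [unlabelled[j] for j in np.argsort(-uncertainty)[:batch]]
        for i in picked:
            y[i] = run_solvers(X_pool[i])
            labelled.append(i)
    return model
```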