Abstract: Human reasoning can distill principles from observed patterns and generalize them to explain and solve novel problems. The most powerful artificial intelligence systems lack explainability and symbolic reasoning ability, and have therefore not achieved supremacy in domains requiring human understanding, such as science or common sense reasoning. Here we introduce deep distilling, a machine learning method that learns patterns from data using explainable deep learning and then condenses them into concise, executable computer code. The code, which can contain loops, nested logical statements, and useful intermediate variables, is equivalent to the neural network but is generally orders of magnitude more compact and human-comprehensible. On a diverse set of problems involving arithmetic, computer vision, and optimization, we show that deep distilling generates concise code that generalizes out-of-distribution to solve problems orders of magnitude larger and more complex than the training data. For problems with a known ground-truth rule set, deep distilling discovers the rule set exactly with scalable guarantees. For problems that are ambiguous or computationally intractable, the distilled rules are similar to existing human-derived algorithms and perform on par with or better than them. Our approach demonstrates that unassisted machine intelligence can build generalizable and intuitive rules explaining patterns in large datasets that would otherwise overwhelm human reasoning.
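To make the abstract's claim concrete, the following is a hypothetical illustration (not the paper's actual output) of the kind of compact, executable rule set that a method like deep distilling might emit for a simple arithmetic task; the task, function name, and structure are assumptions chosen only to show a program with a loop, a nested conditional, and an intermediate variable, as described above.

```python
# Hypothetical distilled program (illustrative sketch, not the authors' output):
# computes the parity of the number of 1s in a binary sequence.
def parity(bits):
    count = 0                    # intermediate variable tracking observed 1s
    for b in bits:               # loop over the input sequence
        if b == 1:               # nested logical statement
            count += 1
    return 0 if count % 2 == 0 else 1

# A rule expressed as code generalizes out-of-distribution by construction:
# it handles inputs far longer than any training example.
print(parity([1, 0, 1, 1]))      # -> 1
print(parity([0] * 1000))        # -> 0
```

Unlike the weights of a trained network, such a program can be read, checked, and executed directly, which is the sense in which the distilled code is "human-comprehensible" and equivalent to the network it summarizes.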
Abstract: How perception and reasoning arise from neuronal network activity is poorly understood. This is reflected in the fundamental limitations of connectionist artificial intelligence, typified by deep neural networks trained via gradient-based optimization. Despite success on many tasks, such networks remain unexplainable black boxes incapable of symbolic reasoning and concept generalization. Here we show that a simple set of biologically consistent organizing principles confers these capabilities on neuronal networks. To demonstrate, we implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity. On a range of tasks including NP-hard problems, their reasoning capabilities grant additional cognitive functions, like deliberating through self-analysis, tolerating adversarial attacks, and learning transferable rules from simple examples to solve problems of unencountered complexity. The networks also naturally display properties of biological nervous systems inherently absent in current deep neural networks, including sparsity, modularity, and both distributed and localized firing patterns. Because they do not sacrifice performance, compactness, or training time on standard learning tasks, these networks provide a new black-box-free approach to artificial intelligence. They likewise serve as a quantitative framework to understand the emergence of cognition from neuronal networks.
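As a minimal sketch of the contrast the abstract draws between gradient-based optimization and building networks from interpretable concepts, the toy example below hand-constructs a two-layer threshold network for exclusive-or; the weights and the "concepts" assigned to each neuron are assumptions for illustration and do not reproduce the paper's concept-construction algorithm.

```python
import numpy as np

def step(x):
    # Threshold activation: fires (1) when the summed input is non-negative.
    return (x >= 0).astype(int)

# Hidden neurons encode two human-readable concepts (hand-set, not trained):
#   h1 fires if at least one input is 1 (OR); h2 fires if both inputs are 1 (AND).
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# Output neuron encodes "OR but not AND", i.e. exclusive-or.
w_out = np.array([1.0, -2.0])
b_out = -0.5

def xor_net(x):
    h = step(W_hidden @ x + b_hidden)    # each activation is directly explainable
    return int(step(w_out @ h + b_out))

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, xor_net(np.array(x)))       # -> 0, 1, 1, 0
```

The point of the sketch is only that every weight and every firing pattern has a stated meaning, so the network's behavior can be audited symbolically rather than treated as a black box; how such constructions scale to the tasks reported in the abstract is the subject of the paper itself.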