Abstract: Thom and Palm have argued that sparsely-connected neural networks (SCNs) outperform fully-connected networks (FCNs). Super-regular networks (SRNs) are neural networks built from stacked sparse layers, each forming an (epsilon, delta)-super-regular pair, with randomly permuted node order. Using the Blow-up Lemma, we prove that the super-regularity of each pair of layers guarantees a number of properties that make SRNs suitable replacements for FCNs for many tasks. These guarantees include edge uniformity across all large-enough subsets, minimum node in- and out-degree, input-output sensitivity, and the ability to embed pre-trained constructs. Indeed, SRNs have the capacity to act like FCNs and eliminate the need for costly regularization schemes such as Dropout. We show via readily reproducible experiments that SRNs perform similarly to X-Nets, while offering far greater guarantees and control over network structure.
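For context, the guarantees of edge uniformity and minimum degree follow the standard notion of super-regularity used with the Blow-up Lemma of Komlós, Sárközy, and Szemerédi. The sketch below states that textbook definition; the paper's exact parameterization may differ.

```latex
% Sketch of the standard (epsilon, delta)-super-regularity condition for a
% bipartite pair (A, B); parameter conventions may differ from the paper.
\begin{definition}[$(\varepsilon,\delta)$-super-regular pair]
A bipartite graph with parts $A$ and $B$ is \emph{$(\varepsilon,\delta)$-super-regular} if
\begin{enumerate}
  \item (edge uniformity) for every $X \subseteq A$ and $Y \subseteq B$ with
        $|X| > \varepsilon|A|$ and $|Y| > \varepsilon|B|$, the edge count satisfies
        $e(X, Y) > \delta\,|X|\,|Y|$; and
  \item (minimum degree) $\deg(a) > \delta|B|$ for every $a \in A$, and
        $\deg(b) > \delta|A|$ for every $b \in B$.
\end{enumerate}
\end{definition}
```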
Abstract: Ortus is a simple virtual organism that also serves as an initial framework for investigating and developing biologically-based artificial intelligence. Born from a goal to create complex virtual intelligence, and from an initial attempt to model C. elegans, Ortus implements a number of mechanisms observed in organic nervous systems and attempts to fill in unknowns based upon plausible biological implementations and psychological observations. Implemented mechanisms include excitatory and inhibitory chemical synapses, bidirectional gap junctions, and Hebbian learning with its Stentian extension. We present an initial experiment that showcases Ortus' fundamental principles: specifically, a cyclic respiratory circuit, and emotionally-driven associative learning with respect to an input stimulus. Finally, we discuss the implications of, and future directions for, Ortus and similar systems.
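As a rough illustration of the learning rule named above, the sketch below pairs a generic Hebbian weight update with a Stentian (weakening) term under one plausible reading of Stent's extension. The function name, learning rates, and clipping are hypothetical and are not taken from Ortus' implementation.

```python
import numpy as np

def hebbian_stentian_update(w, pre, post, lr_hebb=0.01, lr_stent=0.005):
    """Illustrative Hebbian update with a Stentian extension (hypothetical form).

    w    : (n_pre, n_post) synaptic weight matrix
    pre  : (n_pre,)  presynaptic activity in [0, 1]
    post : (n_post,) postsynaptic activity in [0, 1]

    Hebbian term  : strengthen w[i, j] when pre[i] and post[j] are co-active.
    Stentian term : weaken w[i, j] when the presynaptic neuron fires but the
                    postsynaptic neuron does not (one reading of Stent's rule;
                    Ortus may implement this differently).
    """
    hebb = np.outer(pre, post)          # co-activation
    stent = np.outer(pre, 1.0 - post)   # pre active, post silent
    w = w + lr_hebb * hebb - lr_stent * stent
    return np.clip(w, 0.0, 1.0)         # keep excitatory weights bounded

# Example: one update step for a tiny 3-by-2 circuit
w = np.full((3, 2), 0.5)
w = hebbian_stentian_update(w,
                            pre=np.array([1.0, 0.0, 1.0]),
                            post=np.array([1.0, 0.0]))
```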