Abstract: Hopfield networks are an attractive choice for solving many types of computational problems because they provide a biologically plausible mechanism. The Self-Optimization (SO) model extends the Hopfield network with a biologically founded Hebbian learning rule, combined with repeated network resets to arbitrary initial states, so that the network optimizes its own behavior towards some desirable goal state encoded in the network. To better understand this process, we first demonstrate that the SO model can solve concrete combinatorial problems in SAT form, using two examples: the Liars problem and the map coloring problem. In addition, we show how, under some conditions, critical information can be lost forever, with the learned network producing seemingly optimal solutions that are in fact inappropriate for the problem it was tasked to solve. What appears to be an undesirable side effect of the SO model can provide insight into how it solves intractable problems.
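Since several of the abstracts below build on the same mechanism, a minimal sketch may help fix ideas. The following Python toy is our own illustration, not the paper's code: the network size, the random constraint weights, the learning rate, and the reset count are all arbitrary assumptions. It shows the SO loop: relax a Hopfield network from a random state, reinforce the reached attractor with Hebbian learning, and reset.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                    # number of nodes (illustrative)
W = rng.choice([-1.0, 0.0, 1.0], size=(N, N))
W = np.triu(W, 1) + np.triu(W, 1).T       # symmetric constraints, zero diagonal
W_learned = W.copy()                      # weights modified by learning
alpha = 1e-4                              # Hebbian learning rate (assumed)

def relax(s, weights, steps):
    """Asynchronous Hopfield updates: relax the state into an attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1.0 if weights[i] @ s >= 0 else -1.0
    return s

for _ in range(500):                      # repeated resets to arbitrary states
    s = rng.choice([-1.0, 1.0], size=N)
    s = relax(s, W_learned, steps=10 * N)
    W_learned += alpha * np.outer(s, s)   # Hebbian reinforcement of the attractor
    np.fill_diagonal(W_learned, 0.0)

# quality of a reached state is judged by the ORIGINAL constraint weights
energy = -0.5 * s @ W @ s
```

Solution quality is conventionally measured by the energy of converged states under the original, unlearned constraint weights, since the learned weights no longer represent the problem itself.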
Abstract: The Self-Optimization (SO) model is a useful computational model for investigating self-organization in "soft" Artificial Life (ALife), as it has been shown to be general enough to model various complex adaptive systems. So far, existing work has been limited to relatively small network sizes, precluding the investigation of novel phenomena that might emerge from the complexity of large numbers of nodes interacting in interconnected networks. This work introduces a novel implementation of the SO model that scales as $\mathcal{O}\left(N^{2}\right)$ with respect to the number of nodes $N$, and demonstrates its applicability to networks with system sizes several orders of magnitude larger than previously investigated. By removing the prohibitive computational cost of the naive $\mathcal{O}\left(N^{3}\right)$ algorithm, our on-the-fly computation paves the way for investigating substantially larger system sizes, allowing for more variety and complexity in future studies.
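The paper's exact algorithm is not reproduced here, but one standard way to obtain such a speedup, sketched below under that assumption, is to maintain the local fields $h = Ws$ on the fly and update them in $\mathcal{O}(N)$ per spin flip, instead of recomputing the full matrix-vector product at every one of the $\mathcal{O}(N)$ update steps of a relaxation:

```python
import numpy as np

def relax_on_the_fly(W, s, rng, sweeps=10):
    """Relaxation in O(N^2) per reset: keep the local fields h = W @ s
    current with an O(N) update per flip, instead of the O(N^2)
    recomputation per step that makes a whole relaxation O(N^3)."""
    N = len(s)
    h = W @ s                              # computed once: O(N^2)
    for _ in range(sweeps * N):
        i = rng.integers(N)
        new = 1.0 if h[i] >= 0 else -1.0
        if new != s[i]:
            h += W[:, i] * (new - s[i])    # incremental field update: O(N)
            s[i] = new
    return s
```

The same bookkeeping also gives the energy cheaply, as $E = -\tfrac{1}{2} s \cdot h$, without any further matrix products.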
Abstract: The nervous system of the soil nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size. A general challenge is to better understand the relationship between neural organization and neural activity at the system level, including the functional roles of inhibitory connections. Here we implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron, and we explored the functional role of these physiological differences for neural activity. In particular, we created a Hopfield neural network in which all of the worm's neurons characterized by inhibitory neurotransmitters are assigned inhibitory outgoing connections. Then, we created a control condition in which the same number of inhibitory connections is arbitrarily distributed across the network. A comparison of these two conditions revealed that the biological distribution of inhibitory connections facilitates the self-optimization of coordinated neural activity compared with an arbitrary distribution of inhibitory connections.
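The two conditions can be illustrated with a short sketch (our own, not the paper's code; the toy adjacency matrix, the sign convention, and the helper names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sign_biological(A, inhibitory):
    """All outgoing connections of neurons flagged as inhibitory become
    negative. Convention assumed here: W[i, j] is the weight from neuron
    j to neuron i, so column j holds neuron j's outgoing connections."""
    W = np.abs(A).astype(float)
    W[:, inhibitory] *= -1.0
    return W

def sign_control(A, inhibitory):
    """Control condition: the same number of inhibitory connections,
    scattered over arbitrary edges of the network."""
    W = np.abs(A).astype(float)
    n_inhib = int((W[:, inhibitory] != 0).sum())
    edges = np.argwhere(W != 0)
    picks = edges[rng.choice(len(edges), size=n_inhib, replace=False)]
    W[picks[:, 0], picks[:, 1]] *= -1.0
    return W

A = rng.integers(0, 2, size=(10, 10))     # toy stand-in for the connectome
inhib = rng.random(10) < 0.3              # toy inhibitory-neuron mask
W_bio, W_ctrl = sign_biological(A, inhib), sign_control(A, inhib)
```

Both networks then carry the same count of negative weights, so any difference in self-optimization performance is attributable to where the inhibition sits, not to how much of it there is.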
Abstract: The state space of a conventional Hopfield network typically exhibits many different attractors, of which only a small subset satisfy constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal-coding-based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent of the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alterations of neural firing patterns in actual brains.
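A full spiking emulation is beyond the scope of an abstract, but the bridge between the two coding schemes can be hinted at with a toy readout, which is entirely our illustration (all parameters, thresholds, and names are assumptions, not the paper's model): a unit's binary Hopfield state is recovered from the firing rate of a leaky integrate-and-fire neuron driven by that unit's local field.

```python
import numpy as np

def lif_rate(current, T=200.0, dt=0.5, tau=20.0, v_th=1.0):
    """Firing rate of a leaky integrate-and-fire neuron under a constant
    input current (forward Euler; membrane resets to 0 on a spike)."""
    v, spikes = 0.0, 0
    for _ in np.arange(0.0, T, dt):
        v += dt * (-v / tau + current)
        if v >= v_th:
            v = 0.0
            spikes += 1
    return spikes / T

def unit_state(local_field, rate_threshold=0.02):
    """Rate-based reading of a temporal code: a sufficiently positive
    local field drives firing, so a high spike rate codes for +1."""
    return 1 if lif_rate(max(local_field, 0.0)) > rate_threshold else -1
```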
Abstract: Due to recent advances in synthetic biology and artificial life, the origin of life is currently a hot topic of research. We review the literature and argue that the two traditionally competing "replicator-first" and "metabolism-first" approaches are merging into one integrated theory of individuation and evolution. We contribute to the maturation of this more inclusive approach by highlighting some problematic assumptions that still lead to an impoverished conception of the phenomenon of life. In particular, we argue that the new consensus has so far failed to consider the relevance of intermediate timescales. We propose that an adequate theory of life must account for the fact that all living beings are situated in at least four distinct timescales, which are typically associated with metabolism, motility, development, and evolution. On this view, self-movement, adaptive behavior, and morphological changes could have already been present at the origin of life. In order to illustrate this possibility, we analyze a minimal model of life-like phenomena, namely of precarious, individuated, dissipative structures that can be found in simple reaction-diffusion systems. Based on our analysis, we suggest that processes in intermediate timescales could already have been operative in prebiotic systems. They may have facilitated and constrained changes occurring in the faster- and slower-paced timescales of chemical self-individuation and evolution by natural selection, respectively.
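The abstract does not name the reaction-diffusion system, so the following is only an assumed stand-in: the Gray-Scott model is a standard example in which precarious, localized structures emerge from a homogeneous background and can persist, move, and divide. All parameter values below are conventional textbook choices, not taken from the paper.

```python
import numpy as np

# Gray-Scott reaction-diffusion on a periodic grid; these parameters
# produce localized self-maintaining "spot" structures.
n, Du, Dv, F, k = 128, 0.16, 0.08, 0.035, 0.065
U = np.ones((n, n))
V = np.zeros((n, n))
U[60:68, 60:68], V[60:68, 60:68] = 0.5, 0.25   # local perturbation as seed

def lap(Z):
    """Discrete 5-point Laplacian with periodic boundary conditions."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(10000):
    uvv = U * V * V
    U += Du * lap(U) - uvv + F * (1 - U)
    V += Dv * lap(V) + uvv - (F + k) * V
```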
Abstract: The extended mind hypothesis has stimulated much interest in cognitive science. However, its core claim, i.e., that the process of cognition can extend beyond the brain via the body and into the environment, has been heavily criticized. A prominent critique of this claim holds that when some part of the world is coupled to a cognitive system, this does not necessarily entail that the part is also constitutive of that cognitive system. This critique is known as the "coupling-constitution fallacy". In this paper we respond to this reductionist challenge by using an evolutionary robotics approach to create a minimal model of two acoustically coupled agents. We demonstrate how the interaction process as a whole has properties that cannot be reduced to the contributions of the isolated agents. We also show that the neural dynamics of the coupled agents have formal properties that are inherently impossible for those neural networks in isolation. By keeping the complexity of the model to an absolute minimum, we are able to illustrate how the coupling-constitution fallacy is in fact based on an inadequate understanding of the constitutive role of nonlinear interactions in dynamical systems theory.
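By way of illustration only (the actual controllers were evolved; the network sizes, random weights, and the coupling scheme below are our assumptions), two continuous-time recurrent neural networks can be coupled so that each receives the other's "sound" output as input, making the joint dynamics a property of the coupled pair rather than of either network alone:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(y, W, theta, I, tau=1.0, dt=0.05):
    """One Euler step of a continuous-time recurrent neural network:
    tau * dy/dt = -y + W @ sigma(y + theta) + I."""
    return y + dt * (-y + W @ sigma(y + theta) + I) / tau

rng = np.random.default_rng(2)
W_a, W_b = rng.normal(0, 1, (3, 3)), rng.normal(0, 1, (3, 3))
th = np.zeros(3)
y_a, y_b = np.zeros(3), np.zeros(3)
for _ in range(2000):
    # each agent "hears" the other's output neuron as its sensory input
    I_a = np.array([sigma(y_b[2] + th[2]), 0.0, 0.0])
    I_b = np.array([sigma(y_a[2] + th[2]), 0.0, 0.0])
    y_a, y_b = step(y_a, W_a, th, I_a), step(y_b, W_b, th, I_b)
```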