ETIS
Abstract: In a large variety of systems (biological, physical, social, etc.), synchronization occurs when different oscillating objects tune their rhythms as they interact with each other. The underlying network defining the connectivity among these objects drives the global dynamics in a complex fashion and affects the global degree of synchrony of the system. Here we study the impact of different network architectures, namely Fully-Connected, Random, Regular ring lattice, Small-World and Scale-Free, on the global dynamical activity of a system of coupled Kuramoto phase oscillators. We fix the external stimulation parameters and measure the global degree of synchrony when different fractions of nodes receive the stimulus. These nodes are chosen either randomly or according to their strong or weak connectivity properties (centrality, shortest path length and clustering coefficient). Our main finding is that, in Scale-Free and Random networks, selecting nodes by their eigenvector centrality or average shortest path length yields a systematic trend toward a higher degree of synchrony, whereas no such trend appears when the clustering coefficient is used as a criterion. For the other types of graphs considered, the choice of the stimulated nodes (random versus selective, using the aforementioned criteria) does not seem to have a noticeable effect.
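The setup this abstract describes can be illustrated with a minimal sketch: simulate Kuramoto phase oscillators coupled through a given adjacency matrix and quantify the global degree of synchrony with the standard order parameter r = |(1/N) Σ_j e^{iθ_j}|. The coupling strength, step size, and normalization below are our own illustrative choices, not the paper's actual parameters.

```python
import numpy as np

def kuramoto_order(adj, omega, K=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate Kuramoto oscillators on a network given by `adj`
    and return the final global order parameter r in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = len(omega)
    theta = rng.uniform(0, 2 * np.pi, n)          # random initial phases
    for _ in range(steps):
        # diff[i, j] = theta_j - theta_i; each node i sums sin terms over its neighbors
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + K * (adj * np.sin(diff)).sum(axis=1) / n)
    # Kuramoto order parameter: r = 1 means full phase synchrony
    return np.abs(np.exp(1j * theta).mean())

# Fully-connected graph with identical natural frequencies: strong synchrony expected
n = 20
adj = np.ones((n, n)) - np.eye(n)
omega = np.zeros(n)
r = kuramoto_order(adj, omega)
```

Swapping `adj` for a ring lattice, small-world, or scale-free adjacency matrix, and restricting the stimulated set of nodes, reproduces the kind of comparison the abstract reports.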
Abstract: In this article, we propose a variational inference formulation of auto-associative memories, allowing us to combine perceptual inference and memory retrieval into the same mathematical framework. In this formulation, the prior probability distribution over latent representations is made memory-dependent, thus pulling the inference process towards previously stored representations. We then study how different neural network approaches to variational inference can be applied in this framework. We compare methods relying on amortized inference, such as Variational Autoencoders, with methods relying on iterative inference, such as Predictive Coding, and suggest combining both approaches to design new auto-associative memory models. We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models such as Hopfield Networks, End-to-End Memory Networks and Neural Turing Machines.
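The core idea of a memory-dependent prior can be sketched in a toy form: iterative (predictive-coding-style) MAP inference of a latent code under a linear generative model, with a prior term pulling the latent toward the nearest stored memory. Everything here (the linear model, the nearest-memory prior, the parameters) is a simplified assumption of ours, not the architecture the article actually proposes.

```python
import numpy as np

def retrieve(x, W, memories, lr=0.1, lam=0.5, steps=200):
    """Iteratively infer a latent code z for observation x under a linear
    generative model x ~ W @ z, with a memory-dependent prior whose mean
    is the stored pattern currently closest to z."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        m = min(memories, key=lambda mem: np.linalg.norm(z - mem))
        err = x - W @ z                             # sensory prediction error
        z = z + lr * (W.T @ err - lam * (z - m))    # gradient step on the MAP objective
    return z

W = np.eye(4)                                       # trivial generative model for the demo
memories = [np.array([1., 1., 0., 0.]), np.array([0., 0., 1., 1.])]
x = np.array([1., 0.9, 0.1, 0.])                    # corrupted version of the first memory
z = retrieve(x, W, memories)
```

Because the prior pulls inference toward stored patterns, the corrupted input is completed toward the first memory rather than the second, which is the retrieval behavior the framework aims to capture.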
Abstract: As a phenomenon in dynamical systems allowing autonomous switching between stable behaviors, chaotic itinerancy has gained interest in neurorobotics research. In this study, we draw a connection between this phenomenon and predictive coding theory by showing how a recurrent neural network implementing predictive coding can generate neural trajectories similar to chaotic itinerancy in the presence of input noise. We propose two scenarios in which our model generates random, past-independent attractor-switching trajectories.
Abstract: In this work, we build upon the Active Inference (AIF) and Predictive Coding (PC) frameworks to propose a neural architecture comprising a generative model for sensory prediction and a distinct generative model for motor trajectories. We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories. We furthermore investigate the effects of bidirectional interactions between the motor and visual modules. The architecture is tested on the control of a simulated robotic arm learning to reproduce handwritten letters.
Abstract: In order to keep track of information, the brain has to resolve the problems of where information is stored and how to index new information. We propose that the neural mechanism used by the prefrontal cortex (PFC) to detect structure in temporal sequences, based on the temporal order of incoming information, has served a second purpose: the spatial ordering and indexing of brain networks. We call this process, akin to the manipulation of neural 'addresses' to organize the brain's own network, the 'digitalization' of information. Such a tool is important not only for information processing and preservation, but also for memory formation and retrieval.
Abstract: In this article, we apply the Free-Energy Principle to the question of motor primitive learning. An echo-state network is used to generate motor trajectories. We combine this network with a perception module and a controller that can influence its dynamics. This new compound network permits the autonomous learning of a repertoire of motor trajectories. To evaluate the repertoires built with our method, we exploit them in a handwriting task where primitives are chained to produce long-range sequences.
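The echo-state network mentioned here is a standard reservoir-computing model: a fixed random recurrent reservoir driven by inputs, with only a linear readout trained on the collected states. The following is a generic minimal sketch of such a reservoir (sizes, scaling, and the 0.9 spectral radius are our illustrative choices, not the paper's); the perception module and controller described in the abstract are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 2

# Fixed random reservoir weights, rescaled to spectral radius 0.9 so the
# reservoir has fading memory of its inputs (echo state property, roughly)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.normal(scale=0.5, size=(n_res, n_in))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect the state
    trajectory; only a linear readout on these states would be trained."""
    h = np.zeros(n_res)
    states = []
    for u in inputs:
        h = np.tanh(W @ h + W_in @ u)
        states.append(h.copy())
    return np.array(states)

states = run_reservoir(rng.normal(size=(50, n_in)))
```

Training a readout (e.g. by ridge regression) to map these states to target pen trajectories is the usual way such a reservoir is made to generate motor primitives.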