Abstract: Understanding how biological constraints shape neural computation is a central goal of computational neuroscience. Spatially embedded recurrent neural networks provide a promising avenue for studying how modelled constraints shape the combined structural and functional organisation of networks over learning. Prior work has shown that such spatially embedded systems can integrate structure and function within a single artificial model during learning, but it remains unclear precisely how, in general, structural constraints bound the range of attainable configurations. In this work, we show that these restrictions can be studied through entropic measures of the neural weights and eigenspectrum, across both rate and spiking neural networks. Spatial embedding, in contrast to baseline models, leads to networks with a highly specific low-entropy modularity, in which connectivity is readily interpretable given the known spatial and communication constraints acting on the network. Crucially, these networks also demonstrate systematically modulated spectral dynamics, revealing how they exploit functional heterogeneity to overcome the constraints imposed on their structure. This work deepens our understanding of constrained learning in neural networks, across coding schemes and tasks, where simultaneous structural and functional objectives must be satisfied in tandem.
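To make the entropic measures named in this abstract concrete, the following minimal Python sketch (an illustration under stated assumptions, not the paper's actual pipeline) builds a toy spatially embedded recurrent weight matrix and computes histogram-based Shannon entropies of its weight magnitudes and eigenspectrum; the network size, distance-decay constant, and bin count are all hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

n = 100                                   # number of units (illustrative)
coords = rng.uniform(0, 1, size=(n, 2))   # random 2D positions per unit
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

# Toy stand-in for a trained, spatially embedded network: long-range
# connections are exponentially down-weighted, mimicking the short-range
# bias that distance-penalised learning tends to produce.
W = rng.normal(0, 1 / np.sqrt(n), size=(n, n)) * np.exp(-5.0 * dist)

def shannon_entropy(values, bins=50):
    """Histogram-based Shannon entropy (in bits) of a set of values."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Entropy of the weight-magnitude distribution: lower values indicate a
# more stereotyped, regular connectivity pattern.
weight_entropy = shannon_entropy(np.abs(W).ravel())

# Entropy of the eigenvalue magnitudes: summarises how broadly the
# recurrent dynamics are spread across modes.
spectral_entropy = shannon_entropy(np.abs(np.linalg.eigvals(W)))

print(f"weight entropy:   {weight_entropy:.2f} bits")
print(f"spectral entropy: {spectral_entropy:.2f} bits")

Comparing these values against an unconstrained baseline (the same W without the distance decay) gives an intuition for why spatial embedding would appear as a drop in weight entropy.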
Abstract: There is a concerted effort to build domain-general artificial intelligence in the form of universal neural network models with sufficient computational flexibility to solve a wide variety of cognitive tasks, without requiring fine-tuning on individual problem spaces and domains. To achieve this, models need appropriate priors and inductive biases, such that trained models can generalise to out-of-distribution examples and new problem sets. Here we provide an overview of the hallmarks endowing biological neural networks with the functionality needed for flexible cognition, in order to establish which features might also be important for achieving similar functionality in artificial systems. We specifically discuss the role of the system-level distribution of network communication and recurrence, in addition to the role of short-term topological changes in efficient local computation. As machine learning models become more complex, these principles may provide valuable directions in an otherwise vast space of possible architectures. In addition, testing these inductive biases within artificial systems may help us understand the biological principles underlying domain-general cognition.
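To illustrate the kind of short-term topological change this abstract refers to, here is a hedged Python sketch (a toy construction, not any published model): a fixed recurrent weight matrix is gated at every timestep by a top-k connectivity mask, so the effective communication graph is transiently rewired while the underlying weights stay put; the mask rule, the value of k, and the network size are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)

n = 64
W = rng.normal(0, 1 / np.sqrt(n), size=(n, n))  # fixed recurrent weights

def topk_mask(scores, k):
    """Boolean mask keeping the k largest-magnitude entries per row."""
    mask = np.zeros(scores.shape, dtype=bool)
    idx = np.argsort(-np.abs(scores), axis=1)[:, :k]
    np.put_along_axis(mask, idx, True, axis=1)
    return mask

h = np.zeros(n)
for x in rng.normal(size=(20, n)):  # a short stream of 20 inputs
    # Short-term topology: each unit transiently listens only to its k most
    # relevant presynaptic partners (weight x presynaptic activity), a cheap
    # stand-in for dynamic rerouting of communication.
    relevance = W * h[None, :]
    active = topk_mask(relevance, k=8)
    h = np.tanh((W * active) @ h + x)

print("final state norm:", np.linalg.norm(h))

The point of the sketch is the separation of timescales: W changes only with slow learning, while the mask changes at every step, which is one way to picture efficient local computation via transient topology.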