Abstract: One goal of general intelligence is to learn novel information without overwriting prior learning. The utility of learning without catastrophic forgetting (CF) is twofold: first, the system can return to previously learned tasks after learning something new; second, bootstrapping previous knowledge may allow for faster learning of a novel task. Previous approaches to CF and bootstrapping are primarily based on modifying learning itself: weights are updated to tune the model to the current task, overwriting weights previously tuned for earlier tasks. However, another critical factor that has been largely overlooked is the initial network topology, or architecture. Here, we argue that the topology of biological brains likely evolved features designed to achieve this kind of informational conservation. In particular, we consider that the highly conserved property of modularity may offer a solution to the shortcomings of weight-update learning methods, one that adheres to both the learning-without-catastrophic-forgetting and bootstrapping constraints. Finally, we consider how these two learning objectives might be combined in a dynamical, general learning system.