Abstract: Recent advances in materials technology and in micro- and nano-electronics have profoundly changed the design of intracranial electrophysiology electrodes. It is now possible to manufacture electrodes that record cortical activity at a spatial resolution that was previously unattainable. This high spatial resolution enables recording from the functional structures of the brain and differentiation of the activity of the different neuron types that compose them. In this paper, we review the different types of electrodes now available, and then suggest one of the first applications for such high-resolution electrodes, namely a means to better characterise the mechanisms that generate focal seizures in patients with epilepsy. Finally, we reflect more broadly on prospects for their future use.
Abstract: During language acquisition, children follow a typical sequence of learning stages, whereby they first learn to categorize phonemes before they develop their lexicon and eventually master increasingly complex syntactic structures. However, the computational principles that lead to this learning trajectory remain largely unknown. To investigate this, we compare the learning trajectories of deep language models to those of children. Specifically, we test whether, during its training, GPT-2 exhibits stages of language acquisition comparable to those observed in children aged between 18 months and 6 years. To do so, we train 48 GPT-2 models from scratch and evaluate their syntactic and semantic abilities at each training step, using 96 probes curated from the BLiMP, Zorro and BIG-Bench benchmarks. We then compare these evaluations with the behavior of 54 children during language production. Our analyses reveal three main findings. First, similarly to children, the language models tend to learn linguistic skills in a systematic order. Second, this learning scheme is parallel: the language tasks that are learned last nonetheless improve from the very first training steps. Third, some, but not all, learning stages are shared between children and these language models. Overall, these results shed new light on the principles of language acquisition, and highlight important divergences in how humans and modern algorithms learn to process natural language.
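As an illustrative sketch (not the authors' released code), probe evaluations of this kind are typically minimal-pair tests: a checkpoint passes an item when it assigns a higher log-probability to the grammatical sentence than to its ungrammatical twin. The checkpoint name ("gpt2"), the helper sentence_log_prob, and the example pair below are assumptions for illustration only; the paper's 96 probes are curated from BLiMP, Zorro and BIG-Bench.

```python
# Minimal sketch, assuming a HuggingFace GPT-2 checkpoint: score one
# BLiMP-style minimal pair by comparing total sentence log-probabilities.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Summed log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy
        # (average negative log-likelihood) over the predicted tokens.
        loss = model(input_ids=ids, labels=ids).loss
    n_predicted = ids.shape[1] - 1  # next-token loss skips the first token
    return -loss.item() * n_predicted

# Hypothetical minimal pair probing long-distance subject-verb agreement.
good = "The cats that the dog chases are hungry."
bad = "The cats that the dog chases is hungry."
print("model prefers grammatical variant:",
      sentence_log_prob(good) > sentence_log_prob(bad))
```

Averaging this binary outcome over all items of a probe, at each saved training step, would yield one learning curve per linguistic skill, which is the kind of trajectory the abstract compares against children's production data.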