Abstract: The adaptive leaky integrate-and-fire (ALIF) model is fundamental within computational neuroscience and has been instrumental in studying our brains $\textit{in silico}$. Due to the sequential nature of simulating these neural models, a commonly faced issue is the speed-accuracy trade-off: either accurately simulate a neuron using a small discretisation time-step (DT), which is slow, or more quickly simulate a neuron using a larger DT and incur a loss in simulation accuracy. Here we provide a solution to this dilemma by algorithmically reinterpreting the ALIF model, reducing the sequential simulation complexity and permitting more efficient parallelisation on GPUs. We computationally validate our implementation, obtaining over a $50\times$ training speedup using small DTs on synthetic benchmarks. We also obtain performance comparable to the standard ALIF implementation on different supervised classification tasks, yet in a fraction of the training time. Lastly, we showcase how our model makes it possible to quickly and accurately fit real electrophysiological recordings of cortical neurons, where very fine sub-millisecond DTs are crucial for capturing exact spike timing.
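To make the sequential bottleneck concrete, below is a minimal sketch of the standard time-stepped ALIF simulation the abstract contrasts against (not the accelerated reformulation it proposes). The update equations follow the common discretised ALIF form; parameter values and names are illustrative. Each step depends on the previous one, so halving DT doubles the iteration count, which is exactly the speed-accuracy trade-off described above.

```python
import numpy as np

def simulate_alif(I, dt=1e-4, tau_m=20e-3, tau_a=200e-3, v_th=1.0, beta=1.6):
    """Standard sequential ALIF simulation via Euler discretisation.

    I: (T,) input current per time-step. Parameters are illustrative
    defaults, not values from the paper.
    """
    alpha = np.exp(-dt / tau_m)   # membrane decay per step
    rho = np.exp(-dt / tau_a)     # adaptation decay per step
    v, a = 0.0, 0.0
    spikes = np.zeros(len(I))
    for t in range(len(I)):                  # inherently sequential loop
        v = alpha * v + (1 - alpha) * I[t]   # leaky membrane integration
        z = float(v >= v_th + beta * a)      # spike if adaptive threshold crossed
        v -= z * v_th                        # soft reset on spike
        a = rho * a + z                      # threshold adaptation
        spikes[t] = z
    return spikes
```

At `dt=1e-4` a one-second stimulus already costs 10,000 dependent iterations per neuron; the abstract's contribution is to reduce this sequential complexity rather than to shrink the per-step cost.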
Abstract: Spiking neural networks (SNNs) are a type of artificial network inspired by the use of action potentials in the brain. There is a growing interest in emulating these networks on neuromorphic computers due to their improved energy consumption and speed, which are the main scaling issues of their counterpart, the artificial neural network (ANN). Significant progress has been made in directly training SNNs to perform on par with ANNs in terms of accuracy. These methods are, however, slow due to their sequential nature, leading to long training times. We propose a new technique for directly training single-spike-per-neuron SNNs which eliminates all sequential computation and relies exclusively on vectorised operations. We demonstrate over a $10\times$ speedup in training with robust classification performance on real datasets of low to medium spatio-temporal complexity (Fashion-MNIST and Neuromorphic-MNIST). Our proposed solution manages to solve certain tasks with over a $95.68\%$ reduction in spike counts relative to a conventionally trained SNN, which could significantly reduce energy requirements when deployed on neuromorphic computers.
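The abstract does not spell out the neuron model, so as an illustration of how single-spike times can be computed without any time-stepping, here is a sketch using a known closed form for a non-leaky integrate-and-fire neuron with exponentially decaying synaptic currents (the formulation popularised by Mostafa, 2017, not necessarily the paper's own). For brevity it assumes every input spike arrives before the output spike; a full algorithm must additionally search over causal input sets. All names are illustrative.

```python
import numpy as np

def first_spike_times(in_times, W):
    """Closed-form output spike times for one layer, fully vectorised.

    With unit threshold and synaptic kernel e^{-(t - t_i)}, substituting
    z = e^t turns the threshold-crossing condition into a linear equation,
    giving z_out = (sum_i w_i z_i) / (sum_i w_i - 1).

    in_times: (n_in,) input spike times; W: (n_in, n_out) weights.
    """
    z_in = np.exp(in_times)                  # map times to z = e^t
    num = z_in @ W                           # (n_out,) weighted input sums
    den = W.sum(axis=0) - 1.0                # total drive minus threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        z_out = np.where(den > 0, num / den, np.inf)  # no spike if drive too weak
    return np.log(z_out)                     # back to the time domain
```

Because every operation here is a dense matrix or element-wise op, a whole layer's spike times come out of one pass, which is the kind of exclusively vectorised computation the abstract claims replaces the sequential simulation loop.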
Abstract: Deep artificial neural networks require a large corpus of training data in order to learn effectively, and collecting such training data is often expensive and laborious. Data augmentation overcomes this issue by artificially inflating the training set with label-preserving transformations. Recently there has been extensive use of generic data augmentation to improve Convolutional Neural Network (CNN) task performance. This study benchmarks various popular data augmentation schemes to allow researchers to make informed decisions as to which training methods are most appropriate for their datasets. Various geometric and photometric schemes are evaluated on a coarse-grained dataset using a relatively simple CNN. Experimental results, run using 4-fold cross-validation and reported in terms of Top-1 and Top-5 accuracy, indicate that cropping in geometric augmentation significantly increases CNN task performance.
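As a concrete illustration of the scheme the study found most effective, below is a minimal sketch of random cropping as a label-preserving geometric augmentation. The exact crop sizes and sampling used in the benchmark are not specified here; function and parameter names are illustrative.

```python
import numpy as np

def random_crop(image, crop_h, crop_w, rng=None):
    """Return a randomly positioned (crop_h, crop_w) crop of an (H, W, C) image.

    The crop origin is sampled uniformly, so each epoch sees a slightly
    shifted view of the same labelled example, inflating the effective
    training set without changing any labels.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_h + 1)    # uniform vertical offset
    left = rng.integers(0, w - crop_w + 1)   # uniform horizontal offset
    return image[top:top + crop_h, left:left + crop_w]
```

In a typical training loop this would be applied independently to every image in each mini-batch, e.g. `batch = np.stack([random_crop(x, 24, 24) for x in batch])` for 32x32 inputs.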