Abstract: Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace the weight multiplications in ANNs with additions, which are more energy-efficient and less computationally intensive. However, training deep SNNs remains a challenge due to the discrete spike function. A popular approach to circumvent this challenge is ANN-to-SNN conversion; however, due to the quantization error and the accumulating error, it often requires a large number of time steps (i.e., high inference latency) to achieve high performance, which negates the advantages of SNNs. To this end, this paper proposes Fast-SNN, which achieves high performance with low latency. We demonstrate an equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which the minimization of the quantization error is transferred to quantized ANN training. With the quantization error minimized, we show that the sequential error is the primary cause of the accumulating error, which we address by introducing a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. Code is available at: https://github.com/yangfan-hu/Fast-SNN.
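To make the stated equivalence concrete, the minimal sketch below (not taken from the released code) compares the two quantizers it relates: a clip-and-floor uniform quantizer on the ANN side and an integrate-and-fire (IF) neuron with soft reset driven by a constant input on the SNN side. The function names, the flooring convention, and the constant-input assumption are illustrative choices, and the signed IF neuron and layer-wise fine-tuning are not shown.

```python
import numpy as np

def ann_quantize(x, T, theta):
    """Spatial quantization in the ANN: clip-and-floor x into T levels of size theta/T."""
    return np.clip(np.floor(x * T / theta), 0, T) * theta / T

def snn_if_rate(x, T, theta):
    """Temporal quantization in the SNN: an IF neuron with soft reset (reset by
    subtraction) receives the constant input x for T time steps; its rescaled
    spike count plays the role of the quantized activation."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= theta:
            v -= theta
            spikes += 1
    return spikes * theta / T

# For constant inputs in [0, theta], the two quantizers coincide, which is the
# kind of mapping that lets quantization error be minimized during ANN training.
theta, T = 1.0, 4
for x in np.linspace(0.0, theta, 9):
    assert abs(ann_quantize(x, T, theta) - snn_if_rate(x, T, theta)) < 1e-9
```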
Abstract: Recently, spiking neural networks (SNNs) have received significant attention for their biological plausibility. SNNs theoretically have at least the same computational power as traditional artificial neural networks (ANNs), and they have the potential to achieve revolutionary energy efficiency. However, at the current stage, training a very deep SNN remains a major challenge. In this paper, we propose an efficient approach to building a spiking version of the deep residual network (ResNet), which represents the state of the art in convolutional neural networks (CNNs). We employ the idea of converting a trained ResNet into a network of spiking neurons, named Spiking ResNet. To address the conversion problem, we propose a shortcut normalisation mechanism to appropriately scale continuous-valued activations to match firing rates in the SNN, and a layer-wise error compensation approach to reduce the error caused by discretisation. Experimental results on MNIST, CIFAR-10, and CIFAR-100 demonstrate that the proposed Spiking ResNet achieves state-of-the-art performance among SNNs.
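As a rough illustration of the shortcut normalisation idea described above (a sketch based on a reading of the abstract, not the authors' released code; the function names and the use of per-layer maximum activations as scale factors are assumptions), the snippet below rescales a residual block so that the weighted main path and the weight-free shortcut path deliver inputs on the same firing-rate scale to the merging addition.

```python
import numpy as np

def max_activation(activations):
    """Scale factor for one layer: the maximum activation observed on a
    calibration set (a common data-based normalisation choice)."""
    return float(np.max(activations))

def normalise_weighted_layer(W, b, lam_prev, lam_cur):
    """Rescale a weighted layer so ANN activations in [0, lam_cur] map to
    firing rates in [0, 1] in the converted SNN."""
    return W * (lam_prev / lam_cur), b / lam_cur

def normalise_shortcut(lam_in, lam_out):
    """The identity shortcut has no weights to absorb the rescaling, so it is
    given an explicit gain; both branches then meet the addition on the same
    scale (one reading of the abstract's 'shortcut normalisation')."""
    return lam_in / lam_out
```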