Direct Training for Spiking Neural Networks: Faster, Larger, Better

Authors

  • Yujie Wu, Tsinghua University
  • Lei Deng, University of California, Santa Barbara
  • Guoqi Li, Tsinghua University
  • Jun Zhu, Tsinghua University
  • Yuan Xie, University of California, Santa Barbara
  • Luping Shi, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v33i01.33011311

Abstract

Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining more attention. However, SNNs have not yet shown performance competitive with artificial neural networks (ANNs), due to the lack of effective learning algorithms and efficient programming frameworks. We address this issue from two aspects: (1) we propose a neuron normalization technique to adjust the neural selectivity and develop a direct learning algorithm for deep SNNs; (2) by narrowing the rate coding window and converting the leaky integrate-and-fire (LIF) model into an explicitly iterative version, we present a PyTorch-based implementation method for training large-scale SNNs. In this way, we are able to train deep SNNs with a speedup of tens of times. As a result, we achieve significantly better accuracy than previously reported work on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and accuracy comparable to existing ANNs and pre-trained SNNs on a non-spiking dataset (CIFAR10). To the best of our knowledge, this is the first work to demonstrate direct training of deep SNNs with high performance on CIFAR10, and the efficient implementation provides a new way to explore the potential of SNNs.
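
The key implementation idea in the abstract is to unroll the LIF dynamics as an explicit per-timestep update, so the network can be trained like a recurrent model with standard PyTorch autograd. The snippet below is a minimal sketch of that idea under assumed settings: the function name `lif_forward`, the decay constant, the firing threshold, and the hard reset are illustrative choices, not the authors' released code, and a surrogate gradient would still be needed to backpropagate through the threshold function, which this sketch omits.

```python
import torch

def lif_forward(inputs, decay=0.5, v_threshold=1.0):
    """Explicitly iterative LIF forward pass (illustrative sketch).

    inputs: tensor of shape (T, batch, features) holding the synaptic
    input at each of the T timesteps of the rate coding window.
    """
    v = torch.zeros_like(inputs[0])          # membrane potential
    spikes = []
    for t in range(inputs.shape[0]):         # explicit iteration over time
        v = decay * v + inputs[t]            # leaky integration
        spike = (v >= v_threshold).float()   # fire when the threshold is crossed
        v = v * (1.0 - spike)                # hard reset after a spike
        spikes.append(spike)
    return torch.stack(spikes)               # binary spike trains, shape (T, batch, features)

# Example: an 8-step coding window, batch of 4, 10 input features.
out = lif_forward(torch.rand(8, 4, 10))
print(out.shape)  # torch.Size([8, 4, 10])
```

Because the loop runs over a short coding window, the unrolled computation graph stays small, which is what makes training on GPUs with a standard deep learning framework practical.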

Published

2019-07-17

How to Cite

Wu, Y., Deng, L., Li, G., Zhu, J., Xie, Y., & Shi, L. (2019). Direct Training for Spiking Neural Networks: Faster, Larger, Better. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1311-1318. https://doi.org/10.1609/aaai.v33i01.33011311

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Cognitive Modeling