Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks

Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, Shi Gu


Abstract

Spiking Neural Networks (SNNs) have emerged as a biology-inspired method that mimics the spiking behavior of brain neurons. This bio-mimicry gives SNNs their energy-efficient inference on neuromorphic hardware, but it also creates an intrinsic disadvantage when training high-performing SNNs from scratch, since the discrete spike function prohibits gradient calculation. To overcome this issue, the surrogate gradient (SG) approach has been proposed as a continuous relaxation. Yet the heuristic choice of SG leaves open how the SG actually benefits SNN training. In this work, we first study the gradient-descent problem in SNN training theoretically and introduce the finite difference gradient to quantitatively analyze the training behavior of SNNs. Based on the introduced finite difference gradient, we propose a new family of Differentiable Spike (Dspike) functions that can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation. Extensive experiments over several popular network structures show that training SNNs with Dspike consistently outperforms state-of-the-art training methods. For example, on the CIFAR10-DVS classification task, we train a spiking ResNet-18 and achieve 75.4% top-1 accuracy with 10 time steps.
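The core difficulty described above can be illustrated with a minimal sketch: the finite difference of the hard spike function is zero almost everywhere (and blows up exactly at the threshold), whereas a tanh-shaped Dspike-style relaxation with a temperature parameter `b` yields a smooth, usable gradient. The parameterization below follows the paper's tanh family only loosely; the threshold of 0.5 and `b=3.0` are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def spike(v):
    """Hard spike: fires (1.0) when the membrane potential crosses 0.5."""
    return (v >= 0.5).astype(np.float64)

def fd_grad(f, v, h=1e-4):
    """Central finite-difference gradient of f at v."""
    return (f(v + h) - f(v - h)) / (2.0 * h)

def dspike(v, b=3.0):
    """Tanh-shaped relaxation of the step on [0, 1]; b controls smoothness.
    dspike(0) = 0, dspike(1) = 1, dspike(0.5) = 0.5."""
    return (np.tanh(b * (v - 0.5)) + np.tanh(0.5 * b)) / (2.0 * np.tanh(0.5 * b))

def dspike_grad(v, b=3.0):
    """Analytic derivative of the relaxation, used as a surrogate gradient."""
    return b * (1.0 - np.tanh(b * (v - 0.5)) ** 2) / (2.0 * np.tanh(0.5 * b))

v = np.array([0.1, 0.3, 0.7, 0.9])
print(fd_grad(spike, v))   # zero everywhere away from the threshold
print(dspike_grad(v))      # smooth surrogate gradient, peaked near 0.5
```

In the paper's scheme the forward pass keeps the hard spike while the backward pass substitutes the surrogate derivative, and the temperature evolves during training to tune the gradient's shape and smoothness.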

Benchmarks

| Benchmark | Methodology | Metric |
|---|---|---|
| event-data-classification-on-cifar10-dvs-1 | Dspike (ResNet-18) | Accuracy: 75.4% |
| image-classification-on-cifar-100 | Dspike (ResNet-18) | Percentage correct: 74.24% |
| image-classification-on-imagenet | Dspike (VGG-16) | Top-1 Accuracy: 71.24% |
