S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks
Marco Paul E. Apolinario; Kaushik Roy

Abstract
Spiking Neural Networks (SNNs) are biologically plausible models that have been identified as potentially apt for deploying energy-efficient intelligence at the edge, particularly for sequential learning tasks. However, training SNNs poses significant challenges due to the necessity of precise temporal and spatial credit assignment. The back-propagation through time (BPTT) algorithm, while the most widely used method for addressing these issues, incurs a high computational cost due to its temporal dependency. In this work, we propose S-TLLR, a novel three-factor temporal local learning rule inspired by the Spike-Timing Dependent Plasticity (STDP) mechanism, aimed at training deep SNNs on event-based learning tasks. Furthermore, S-TLLR is designed to have low memory and time complexities that are independent of the number of time steps, rendering it suitable for online learning on low-power edge devices. To demonstrate the scalability of the proposed method, we conducted extensive evaluations on event-based datasets spanning a wide range of applications, such as image and gesture recognition, audio classification, and optical flow estimation. In all experiments, S-TLLR achieved high accuracy, comparable to BPTT, with a reduction in memory between $5-50\times$ and in multiply-accumulate (MAC) operations between $1.3-6.6\times$.
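The abstract describes S-TLLR as a three-factor temporal local learning rule whose memory does not grow with the number of time steps. The sketch below is not the authors' exact update; it is a minimal illustration, under assumed leaky integrate-and-fire dynamics, of how an STDP-inspired eligibility trace can be combined with an instantaneous top-down learning signal to update weights online. All names (`LocalSpikingLayer`, `surrogate_grad`, `alpha`, `beta`, `lr`) are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of a three-factor, STDP-inspired temporal local learning
# rule for one linear spiking layer (NOT the exact S-TLLR update from the paper).
import numpy as np

def surrogate_grad(v, v_th=1.0, slope=5.0):
    """Fast-sigmoid surrogate derivative of the spike nonlinearity (assumed form)."""
    return 1.0 / (1.0 + slope * np.abs(v - v_th)) ** 2

class LocalSpikingLayer:
    def __init__(self, n_in, n_out, alpha=0.9, beta=0.9, lr=1e-3):
        self.w = np.random.randn(n_out, n_in) * 0.1
        self.alpha = alpha                     # membrane-potential decay
        self.beta = beta                       # eligibility-trace decay
        self.lr = lr
        self.v = np.zeros(n_out)               # membrane potential
        self.trace = np.zeros((n_out, n_in))   # eligibility trace, fixed size

    def step(self, x_t, top_down_t):
        """One time step: forward pass plus local weight update.

        x_t        : presynaptic spike vector, shape (n_in,)
        top_down_t : third factor, e.g. a learning signal from above, shape (n_out,)
        """
        # Leaky integrate-and-fire dynamics with a hard reset.
        self.v = self.alpha * self.v + self.w @ x_t
        s_t = (self.v >= 1.0).astype(float)
        self.v *= (1.0 - s_t)

        # STDP-like eligibility trace: a decaying record of pre-post coincidences,
        # updated online so memory stays constant regardless of sequence length.
        post = surrogate_grad(self.v)
        self.trace = self.beta * self.trace + np.outer(post, x_t)

        # Three-factor update: pre/post activity captured by the trace,
        # modulated by the learning signal available at the current step only.
        self.w += self.lr * top_down_t[:, None] * self.trace
        return s_t
```

Because the trace is updated recursively at each step, the rule needs only the current state rather than the full spike history required by BPTT, which is the source of the time-step-independent memory footprint claimed in the abstract.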
Benchmarks
| Benchmark | Methodology | Metric | Value |
|---|---|---|---|
| event-based-optical-flow-on-mvsec | S-TLLR | Average End-Point Error | 3.45 |
| gesture-recognition-on-dvs128-gesture | S-TLLR | Accuracy (%) | 97.72 |
| image-classification-on-n-caltech-101 | S-TLLR | Accuracy (%) | 66.05 |