DeepTempo: A Hardware-Friendly Direct Feedback Alignment Multi-Layer Tempotron Learning Rule for Deep Spiking Neural Networks
Author(s): Shi, C (Shi, Cong); Wang, TX (Wang, Tengxiao); He, JX (He, Junxian); Zhang, JH (Zhang, Jianghao); Liu, LY (Liu, Liyuan); Wu, NJ (Wu, Nanjian)
Source: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS Volume: 68 Issue: 5 Pages: 1581-1585 DOI: 10.1109/TCSII.2021.3063784 Published: MAY 2021
Abstract: Layer-by-layer error back-propagation (BP) in deep spiking neural networks (SNNs) involves complex operations and high latency. To overcome these problems, we propose a method to efficiently and rapidly train deep SNNs by extending the well-known single-layer Tempotron learning rule to multiple SNN layers under the Direct Feedback Alignment framework, which directly projects output errors onto each hidden layer via a fixed random feedback matrix. A trace-based optimization for Tempotron learning is also proposed. With these two techniques, the learning process becomes spatiotemporally local and is well suited to neuromorphic hardware implementation. We applied the proposed hardware-friendly method to train multi-layer and deep SNNs, and obtained comparably high recognition accuracies on the MNIST and ETH-80 datasets.
Accession Number: WOS:000645863300005
ISSN: 1549-7747
eISSN: 1558-3791
Full Text: https://ieeexplore.ieee.org/document/9369402
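The Direct Feedback Alignment (DFA) mechanism described in the abstract can be illustrated with a minimal sketch. Note the assumptions: this toy uses rate-coded sigmoid neurons rather than the paper's spiking Tempotron units, and all layer sizes, the learning rate, and the function names (`dfa_step`) are hypothetical. The sketch only shows the structural idea the abstract names: each hidden layer receives the output error through its own fixed random feedback matrix, so updates are layer-local and no layer-by-layer BP chain is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper)
n_in, n_h1, n_h2, n_out = 8, 16, 16, 4

# Trainable forward weights
W1 = rng.normal(0, 0.1, (n_h1, n_in))
W2 = rng.normal(0, 0.1, (n_h2, n_h1))
W3 = rng.normal(0, 0.1, (n_out, n_h2))

# Fixed random feedback matrices: they project the output error
# directly onto each hidden layer and are never updated.
B1 = rng.normal(0, 0.1, (n_h1, n_out))
B2 = rng.normal(0, 0.1, (n_h2, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dfa_step(x, target, lr=0.1):
    """One DFA update: forward pass, then layer-local weight updates
    driven by the directly projected output error."""
    global W1, W2, W3
    h1 = sigmoid(W1 @ x)
    h2 = sigmoid(W2 @ h1)
    y = sigmoid(W3 @ h2)
    e = y - target                      # output error
    # Direct random projections replace the backprop chain:
    d1 = (B1 @ e) * h1 * (1.0 - h1)
    d2 = (B2 @ e) * h2 * (1.0 - h2)
    d3 = e * y * (1.0 - y)
    W1 -= lr * np.outer(d1, x)
    W2 -= lr * np.outer(d2, h1)
    W3 -= lr * np.outer(d3, h2)
    return float(np.sum(e ** 2))
```

Because each `d_i` depends only on the local activation and the projected error, every layer can update in parallel as soon as the output error is available, which is the hardware-friendliness property the abstract emphasizes.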