M-LSM: An Improved Multi-Liquid State Machine for Event-Based Vision Recognition

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Event-based computation has recently gained increasing research interest for vision recognition applications due to its intrinsic advantages in efficiency and speed. However, existing event-based models for vision recognition face several issues, such as large network complexity and high training cost. In this paper, we propose an improved multi-liquid state machine (M-LSM) method for high-performance vision recognition. Specifically, we introduce two methods, multi-state fusion and multi-liquid search, to optimize the liquid state machine (LSM). Multi-state fusion samples the liquid state at multiple timesteps and thus preserves richer spatiotemporal information. We adapt network architecture search (NAS) to find a potentially optimal architecture of the multi-liquid state machine. We also train the M-LSM with an unsupervised learning rule, spike-timing-dependent plasticity (STDP). Our M-LSM is evaluated on two event-based datasets and demonstrates state-of-the-art recognition performance with clear advantages in network complexity and training cost.
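
To make the multi-state fusion idea concrete, the following sketch (not the authors' code) simulates a small random leaky integrate-and-fire reservoir in plain NumPy and samples its spike-count state at several intermediate timesteps, concatenating the samples into a single readout feature. All sizes, constants, connection densities, and the choice of spike counts as the liquid state are illustrative assumptions.

```python
# Minimal sketch of "multi-state fusion" in a liquid state machine:
# drive a fixed random spiking reservoir (the "liquid") with an input
# spike train and read out the liquid state at several intermediate
# timesteps, concatenating them into one feature vector.
# All parameters below are assumptions made only to keep the example
# self-contained and runnable; they are not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_LIQ, T = 64, 200, 100         # input neurons, liquid neurons, timesteps
V_TH, V_RESET, LEAK = 1.0, 0.0, 0.95  # hypothetical LIF parameters

# Fixed random input and recurrent weights, as in a standard (untrained) LSM.
W_in = rng.normal(0.0, 0.5, size=(N_LIQ, N_IN)) * (rng.random((N_LIQ, N_IN)) < 0.1)
W_rec = rng.normal(0.0, 0.3, size=(N_LIQ, N_LIQ)) * (rng.random((N_LIQ, N_LIQ)) < 0.05)
np.fill_diagonal(W_rec, 0.0)

def run_liquid(input_spikes, sample_steps):
    """Simulate the liquid and return spike-count states sampled at the
    given timesteps (multi-state fusion), concatenated into one vector."""
    v = np.zeros(N_LIQ)               # membrane potentials
    prev_spk = np.zeros(N_LIQ)        # liquid spikes from the previous step
    counts = np.zeros(N_LIQ)          # running spike counts (the liquid state)
    states = []
    for t in range(T):
        v = LEAK * v + W_in @ input_spikes[t] + W_rec @ prev_spk
        spk = (v >= V_TH).astype(float)
        v = np.where(spk > 0, V_RESET, v)   # reset neurons that fired
        counts += spk
        prev_spk = spk
        if t + 1 in sample_steps:           # sample the state at this timestep
            states.append(counts.copy())
    return np.concatenate(states)           # fused multi-timestep feature

# Example: a random Poisson-like spike train standing in for a DVS event stream.
x = (rng.random((T, N_IN)) < 0.05).astype(float)
feature = run_liquid(x, sample_steps=(25, 50, 75, 100))
print(feature.shape)   # (4 * N_LIQ,) -- richer than a single end-of-run state
```

A classifier (for example, a simple linear readout) would then be trained on these fused features; the reservoir weights in this sketch stay fixed, whereas the paper additionally tunes the M-LSM with unsupervised STDP and searches over multiple liquids via NAS.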

Author information

Corresponding author: Lei Wang.

Supplementary Information

ESM 1 (PDF 149 kb)

About this article

Cite this article

Wang, L., Guo, SS., Qu, LH. et al. M-LSM: An Improved Multi-Liquid State Machine for Event-Based Vision Recognition. J. Comput. Sci. Technol. 38, 1288–1299 (2023). https://doi.org/10.1007/s11390-021-1326-8
