
Neuromorphic Perception and Navigation for Mobile Robots: A Review


Abstract

With the rapid and unstoppable evolution of robotics and artificial intelligence, effective autonomous navigation in real-world scenarios has become one of the most pressing challenges in the literature. However, demanding requirements, such as real-time operation, energy and computational efficiency, robustness, and reliability, make most current solutions unsuitable for real-world deployment. Researchers are therefore driven to seek innovative approaches, such as bio-inspired solutions. Indeed, animals have the intrinsic ability to efficiently perceive, understand, and navigate their unstructured surroundings. To do so, they exploit self-motion cues, proprioception, and visual flow in a cognitive process that maps their environment and locates them within it. Computational neuroscientists aim to answer “how” and “why” such cognitive processes occur in the brain, informing the design of novel neuromorphic sensors and methods that imitate biological processing. This survey comprehensively reviews the application of brain-inspired strategies to autonomous navigation, covering neuromorphic perception and asynchronous event processing, energy-efficient and adaptive learning, and the imitation of the working principles of brain areas that play a crucial role in navigation, such as the hippocampus and the entorhinal cortex.
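As an aside on the path-integration process mentioned above, the short Python sketch below illustrates the basic idea of accumulating self-motion cues into a pose estimate. It is not code from the survey; the function name, parameter values, and update rule are illustrative assumptions only.

    import math

    # Minimal path-integration (dead-reckoning) sketch; illustrative assumption,
    # not taken from the survey. Self-motion cues (linear speed v, angular rate w)
    # are accumulated into a 2D pose (x, y, heading).
    def integrate_path(pose, v, w, dt):
        x, y, theta = pose
        theta = (theta + w * dt) % (2.0 * math.pi)   # update heading from angular rate
        x += v * dt * math.cos(theta)                # project speed onto world axes
        y += v * dt * math.sin(theta)
        return (x, y, theta)

    # Example: drive forward while turning slowly; the estimate traces an arc.
    pose = (0.0, 0.0, 0.0)
    for _ in range(100):
        pose = integrate_path(pose, v=0.5, w=0.1, dt=0.05)
    print(pose)

As in biological path integration, sensor noise in such an open-loop estimate accumulates as drift, which is why the brain-inspired systems reviewed here complement it with place-, grid-, and head-direction-cell-like representations that anchor the estimate to the environment.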


• Published in

  ACM Computing Surveys (Just Accepted)
  ISSN: 0360-0300
  EISSN: 1557-7341

Copyright © 2024 held by the owner/author(s). Publication rights licensed to ACM.


          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Online AM: 9 April 2024
          • Accepted: 7 March 2024
          • Revised: 3 November 2023
          • Received: 31 March 2023
