LoCS-Net: Localizing Convolutional Spiking Neural Network for Fast Visual Place Recognition

Abstract

Visual place recognition (VPR) is the ability to recognize locations in a physical environment based only on visual inputs. It is a challenging task due to perceptual aliasing, viewpoint and appearance variations, and the complexity of dynamic scenes. Despite promising demonstrations, many state-of-the-art VPR approaches based on artificial neural networks (ANNs) are computationally inefficient. Spiking neural networks (SNNs), implemented on neuromorphic hardware, are reported to offer substantially more computationally efficient solutions than ANNs. However, training state-of-the-art (SOTA) SNNs for the VPR task is often intractable on large and diverse datasets. To address this, we develop an end-to-end convolutional SNN model for VPR that leverages back-propagation for tractable training. Rate-based approximations of leaky integrate-and-fire (LIF) neurons are employed during training to enable back-propagation, and these approximation units are replaced with spiking LIF neurons during inference. The proposed method outperforms SOTA ANNs and SNNs, achieving 78.2% precision at 100% recall on the challenging Nordland dataset (versus 53% for the previous SOTA), and performs competitively on the Oxford RobotCar dataset, while being easier to train and faster in both training and inference than other ANN- and SNN-based methods.
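To illustrate the training-versus-inference swap described in the abstract, the sketch below (not the authors' code; all names and parameter values are illustrative assumptions) shows a differentiable, rate-based LIF activation that permits back-propagation during training, and a discrete-time spiking LIF module that can stand in for it at inference.

```python
# Minimal sketch, assuming generic LIF dynamics: a smooth rate-based surrogate for
# training, swapped for a spiking simulation at inference. Names such as RateLIF,
# SpikingLIF, tau, v_th, and t_ref are hypothetical, chosen only for illustration.

import torch
import torch.nn as nn


class RateLIF(nn.Module):
    """Differentiable steady-state LIF firing rate, usable as a training-time activation."""

    def __init__(self, tau=0.02, v_th=1.0, t_ref=0.002, eps=1e-3):
        super().__init__()
        self.tau, self.v_th, self.t_ref, self.eps = tau, v_th, t_ref, eps

    def forward(self, current):
        # Only supra-threshold input drives a nonzero rate; eps keeps the log finite.
        over = torch.clamp(current - self.v_th, min=0.0) + self.eps
        rate = 1.0 / (self.t_ref + self.tau * torch.log1p(self.v_th / over))
        return torch.where(current > self.v_th, rate, torch.zeros_like(rate))


class SpikingLIF(nn.Module):
    """Discrete-time spiking LIF used at inference; returns the mean firing rate over T steps."""

    def __init__(self, tau=0.02, v_th=1.0, dt=0.001, steps=50):
        super().__init__()
        self.decay = 1.0 - dt / tau
        self.v_th, self.steps = v_th, steps

    def forward(self, current):
        v = torch.zeros_like(current)
        spikes = torch.zeros_like(current)
        for _ in range(self.steps):
            v = self.decay * v + current          # leaky integration of the input current
            fired = (v >= self.v_th).float()      # threshold crossing emits a spike
            spikes += fired
            v = v * (1.0 - fired)                 # reset membrane potential after a spike
        return spikes / self.steps                # average spike count per step


# Usage sketch: train with the rate-based unit, then swap in the spiking unit for inference.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), RateLIF(), nn.Flatten(), nn.Linear(8 * 32 * 32, 16))
# ... training with back-propagation happens here ...
model[1] = SpikingLIF()  # replace the approximation unit with a spiking LIF neuron layer
```

This mirrors the general idea only; the paper's actual convolutional architecture, neuron parameters, and conversion procedure may differ.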
