A Lightweight Image Hashing Retrieval Method Based on Hybrid Neural Networks and Asymmetric Learning

Abstract

Image hashing has become an essential technique for efficient large-scale image retrieval. However, existing Transformer-based hashing methods often suffer from high computational complexity and limited feature extraction capability, which restrict retrieval accuracy. In this paper, we propose a novel image hashing model that integrates lightweight hybrid neural networks with an asymmetric learning strategy. First, we design a hybrid backbone that combines a Convolutional Neural Network (CNN) for local feature extraction with Transformers for global relationship modeling, enabling more effective and discriminative image representations. Second, an External Attention Network (EAN) module is introduced, which leverages shared external memory units to model cross-sample dependencies and thus capture dataset-level discriminative features, further enhancing the model's feature extraction capability. Finally, we propose a novel asymmetric loss function that exploits supervised information from both the training set and the database, which accelerates convergence and facilitates the generation of high-quality hash codes. Extensive experiments conducted on two public benchmarks, CIFAR-10 and NUS-WIDE, demonstrate that our method achieves superior retrieval performance compared with state-of-the-art approaches. The code for the model proposed in this paper is available at https://github.com/sulanqing/HALH/tree/main.
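
The abstract does not detail the EAN module, but external attention has a standard formulation (Guo et al., "Beyond Self-Attention: External Attention Using Two Linear Layers") in which the per-sample keys and values of self-attention are replaced by two small learnable memory units shared across the whole dataset. The sketch below follows that published formulation; the module name, memory size, and double-normalization step are assumptions for illustration, not the authors' exact EAN design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """Minimal sketch of external attention with two shared memory units.

    Unlike self-attention, the key and value projections (M_k, M_v) are
    learnable parameters shared across all samples, so the module can
    capture dataset-level correlations at cost linear in sequence length.
    Hyperparameters here are illustrative, not taken from the paper.
    """

    def __init__(self, d_model: int, num_memory: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, num_memory, bias=False)  # external key memory
        self.mv = nn.Linear(num_memory, d_model, bias=False)  # external value memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        attn = self.mk(x)                          # (batch, seq_len, num_memory)
        attn = F.softmax(attn, dim=1)              # normalize over the token axis
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)                       # (batch, seq_len, d_model)


# Example usage: a batch of 8 token sequences with 128-dim embeddings.
ea = ExternalAttention(d_model=128, num_memory=64)
out = ea(torch.randn(8, 196, 128))
print(out.shape)  # torch.Size([8, 196, 128])
```

Because the memory units are shared across samples, such a module can learn correlations at the dataset level rather than within a single image, which matches the cross-sample dependency modeling described in the abstract.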
