Sketch Recognition Using Mamba Model for Computer Vision Tasks
Abstract
Sketch recognition involves classifying and retrieving hand-drawn sketches. Traditional deep learning models such as CNNs, RNNs, and transformers often struggle with unbalanced datasets and poor generalization across sketch styles and categories, and these limitations hinder the development of effective sketch recognition systems. To address these challenges, we propose Mamba, a novel deep learning framework for sketch classification and retrieval. Mamba integrates CNNs for feature extraction, RNNs for capturing temporal dependencies, and a dedicated Mamba module that enhances visual attention mechanisms and feature activation mapping. This dynamic refinement of sketch representations improves generalization and adaptability. We train the model on large-scale datasets such as QuickDraw, TU-Berlin, and SketchyScene, where Mamba outperforms existing methods in recognition accuracy, robustness, and interpretability. Our evaluations demonstrate that Mamba not only improves classification precision but also provides valuable insights into feature attribution. The framework is a promising approach for real-world sketch recognition applications and highlights the importance of structured feature representation learning and attention mechanisms for improving reliability, ease of use, and performance in sketch-based computer vision.
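
To make the hybrid architecture described above more concrete, here is a minimal, hypothetical PyTorch-style sketch of a CNN + RNN + state-space ("Mamba-style") classifier. The layer sizes, the simplified gated recurrence used as a stand-in for a Mamba block, and all module names (e.g. `SimpleSSMBlock`, `SketchClassifier`) are illustrative assumptions for exposition, not the paper's actual design.

```python
# Hypothetical sketch of a hybrid CNN + RNN + state-space ("Mamba-style") classifier.
# Layer sizes, the simplified SSM block, and all module names are illustrative
# assumptions, not the architecture proposed in the paper.
import torch
import torch.nn as nn


class SimpleSSMBlock(nn.Module):
    """Toy selective-state-space-style block: a gated linear recurrence
    over the sequence dimension (heavily simplified)."""

    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.zeros(dim))  # learnable per-channel decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        u = self.in_proj(x)
        g = torch.sigmoid(self.gate(x))
        a = torch.sigmoid(self.decay)            # decay coefficient in (0, 1)
        h = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):               # linear recurrence over time steps
            h = a * h + (1 - a) * u[:, t]
            outs.append(h)
        y = torch.stack(outs, dim=1) * g         # gated output
        return self.out_proj(y)


class SketchClassifier(nn.Module):
    """CNN over rasterized stroke snapshots -> GRU over the snapshot sequence ->
    SSM-style block -> class logits."""

    def __init__(self, num_classes: int, dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_dim = nn.Linear(64, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.ssm = SimpleSSMBlock(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 1, H, W) -- a sequence of rasterized stroke snapshots
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)    # (B*T, 64)
        feats = self.to_dim(feats).view(b, t, -1)             # (B, T, D)
        seq, _ = self.rnn(feats)                               # temporal dependencies
        seq = self.ssm(seq)                                    # Mamba-style refinement
        return self.head(seq.mean(dim=1))                      # pool over time


if __name__ == "__main__":
    model = SketchClassifier(num_classes=345)   # QuickDraw has 345 categories
    dummy = torch.randn(2, 8, 1, 64, 64)        # batch of 2, 8 stroke snapshots each
    print(model(dummy).shape)                   # torch.Size([2, 345])
```

In this reading, the CNN handles per-frame feature extraction, the GRU captures stroke-order dependencies, and the gated recurrence stands in for the Mamba module's dynamic refinement of the sequence representation; a real implementation would replace it with a full selective state-space layer.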