An Attention-Guided and Swarm-Optimized Hybrid Deep Learning Framework for Robust Medicinal Plant Identification

Abstract

Accurate and efficient identification of medicinal plants is essential for advancing botanical research, traditional medicine, and pharmaceutical development. Conventional manual identification methods are often labor-intensive, subjective, and reliant on expert knowledge, limiting their scalability and consistency. Moreover, existing automated methods often suffer from low accuracy, an inability to distinguish morphologically similar species, and inefficiency in identifying medicinal plants. To overcome these challenges, this study investigates three progressively advanced deep learning models for automated classification of medicinal plants using a publicly available benchmark dataset. The first model employs a standard Convolutional Neural Network (CNN) architecture, providing a robust baseline for visual feature extraction from plant images. Building upon this baseline, the second model, MedLeaf-ViT, captures both local features and global contextual information, enhancing the model’s ability to discern fine-grained details such as leaf venation, shape, and color patterns. To further improve classification performance, the study proposes PhytoSwarmViTNet, a hybrid framework that integrates a bio-inspired optimization technique, the WolfFly Optimizer, for hyperparameter tuning and feature selection. This nature-inspired optimization algorithm facilitates faster convergence and improved generalization, mitigating overfitting and boosting classification accuracy. All models were trained and validated on the Indian Medicinal Leaves Dataset and implemented in Python. The experimental evaluation shows that the proposed model achieved an accuracy of 99.6%, a precision of 98.8%, a recall of 98.7%, and an F1-score of 98.2%, outperforming the compared methods in robustness and generalization across diverse plant classes. The proposed hybrid models exhibit strong potential for real-world deployment in mobile and edge computing environments, providing scalable and reliable tools for botanical research and healthcare applications.
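
To make the described pipeline more concrete, the minimal PyTorch sketch below illustrates the kind of hybrid CNN + attention classifier and population-based hyperparameter search the abstract refers to. It is not the authors' implementation: the names (HybridLeafNet, evaluate_candidate, swarm_search), the layer sizes, and the search space are illustrative assumptions, and a simple population-based random search stands in for the WolfFly Optimizer, whose update rules are not specified here.

```python
# Illustrative sketch only (not the paper's code): a CNN stem for local features,
# a transformer encoder for global context, and a toy population-based
# hyperparameter search standing in for the WolfFly Optimizer.
import random
import torch
import torch.nn as nn

class HybridLeafNet(nn.Module):
    """Hypothetical hybrid model: convolutional stem + attention over spatial tokens."""
    def __init__(self, num_classes: int, embed_dim: int = 128, depth: int = 2, heads: int = 4):
        super().__init__()
        # CNN stem: downsamples the image and yields a grid of local feature vectors.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.BatchNorm2d(embed_dim), nn.ReLU(),
        )
        # Transformer encoder treats each spatial location as a token (global context).
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           dim_feedforward=embed_dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.stem(x)                       # (B, C, H, W) local features
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)              # self-attention over all tokens
        return self.head(tokens.mean(dim=1))       # mean-pool tokens, then classify

def evaluate_candidate(params: dict) -> float:
    """Placeholder fitness function: builds a model and runs a forward pass on
    random data. A real run would train on the leaf dataset and return
    validation accuracy."""
    model = HybridLeafNet(num_classes=10, embed_dim=params["embed_dim"],
                          depth=params["depth"], heads=params["heads"])
    with torch.no_grad():
        logits = model(torch.randn(2, 3, 64, 64))
    return float(-logits.var())  # stand-in objective; replace with validation accuracy

def swarm_search(pop_size: int = 4, iters: int = 3) -> dict:
    """Toy population-based search: sample candidate configurations and keep the best.
    A WolfFly-style optimizer would instead move candidates toward the current leaders."""
    space = {"embed_dim": [64, 128], "depth": [1, 2, 3], "heads": [2, 4]}
    best, best_score = None, float("-inf")
    for _ in range(iters):
        for _ in range(pop_size):
            cand = {k: random.choice(v) for k, v in space.items()}
            score = evaluate_candidate(cand)
            if score > best_score:
                best, best_score = cand, score
    return best

if __name__ == "__main__":
    print("selected hyperparameters:", swarm_search())
```

In a realistic setting, evaluate_candidate would train each candidate on the Indian Medicinal Leaves Dataset and return its validation accuracy as the fitness value guiding the search.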
