Automated information extraction from plant specimen labels using OCR and large language models
Abstract
The digitization of herbarium specimens is crucial for advancing biodiversity research and data sharing. However, this process is often hindered by the inefficiency of manual transcription and the technical challenges posed by the massive volume of specimens, heterogeneous label layouts, and the prevalence of handwritten text. To overcome these bottlenecks, this study proposes an automated pipeline that integrates the PaddleOCR engine with the DeepSeek large language model (LLM) for structured information extraction from specimen labels.
The pipeline is designed to extract 16 key metadata fields from both printed and handwritten labels. Evaluated on a benchmark dataset, it achieved a high field-level accuracy of 95.4% on printed labels, demonstrating strong reliability. On handwritten labels, the system remained operational and flagged unreliable outputs through a confidence-based quality control mechanism. A key finding was the compensatory role of the LLM, which effectively corrected upstream OCR errors, as evidenced by a weak correlation (r = 0.32) between OCR confidence and final extraction accuracy. This hybrid architecture ensures data security through local image processing and cost-efficiency via text-only LLM parsing.
This work provides a robust, scalable, and practical solution for accelerating the digitization of botanical collections. The method is directly applicable to real-world digitization workflows and promises to significantly enhance the efficiency of biodiversity data creation and sharing.
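The hybrid architecture described above, local OCR followed by text-only LLM parsing with a confidence gate, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the field names, the 0.85 review threshold, and both stage functions (stand-ins for a local PaddleOCR call and a DeepSeek API call) are assumptions for demonstration.

```python
import json

# Illustrative subset of the 16 metadata fields (names are assumed).
FIELDS = ["scientific_name", "collector", "collection_date", "locality"]

def ocr_stage(image_path):
    """Stand-in for a local OCR engine (e.g. PaddleOCR).
    Returns the recognized text and an aggregate confidence score.
    The image never leaves the local machine."""
    return ("Quercus rubra L. Coll. J. Smith 1987-06-12 Ontario", 0.91)

def llm_parse(raw_text):
    """Stand-in for a text-only LLM call (e.g. DeepSeek) that maps raw
    OCR text to structured fields; only text is sent, keeping costs low
    and the specimen image private."""
    return {
        "scientific_name": "Quercus rubra L.",
        "collector": "J. Smith",
        "collection_date": "1987-06-12",
        "locality": "Ontario",
    }

def extract(image_path, review_threshold=0.85):
    """Run OCR, parse the text with the LLM, and apply confidence-based
    quality control: low-confidence OCR output (common for handwriting)
    is flagged for manual review rather than trusted blindly."""
    text, confidence = ocr_stage(image_path)
    record = llm_parse(text)
    record["needs_review"] = confidence < review_threshold
    return record

record = extract("specimen_0001.jpg")
print(json.dumps(record, indent=2))
```

The gating step reflects the abstract's quality-control idea: because LLM parsing can compensate for OCR errors (the reported r = 0.32 correlation), the OCR confidence is used only to route uncertain records to a human, not to reject them outright.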