Humans versus Machines: Artificial intelligence performs similarly to manual analysis in benthic reef monitoring
Abstract
Tropical reefs are among the most biodiverse and threatened marine ecosystems and face both local and global stressors. Understanding community dynamics and ecosystem functioning under these pressures requires long-term monitoring. Image-based surveys are increasingly used in reef ecology because they enable rapid data acquisition and storage. However, manual identification of benthic organisms is time-consuming and potentially subjective, creating a bottleneck between image collection and ecological analysis. Automated annotation tools offer a promising solution, but few studies have directly compared their performance with that of manual approaches, especially under varying image-quality conditions. Here, we assess the usefulness of the automated annotation tool CoralNet for classifying benthic cover in tropical reef images and evaluate how image quality influences both manual and automated annotations. Using expert-validated images, we trained and tested both approaches, measuring performance with Cohen’s Kappa coefficient and modeling the effects of image imperfections. We found that manual and automated annotators achieved similar performance across most taxonomic and morphofunctional groups, and both were similarly affected by image quality. Furthermore, the most frequent imperfections were not the most influential. Our findings demonstrate that automated annotation is a reliable and efficient alternative to manual methods, with strong potential to enhance large-scale monitoring, biodiversity assessments, and conservation strategies in reef ecosystems.
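For reference, Cohen’s Kappa quantifies agreement between two sets of labels (e.g., annotator output versus expert validation) while correcting for the agreement expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement and $p_e$ is the agreement expected if both sets of labels were assigned independently according to their marginal label frequencies; $\kappa = 1$ indicates perfect agreement and $\kappa = 0$ agreement no better than chance.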