Convolutional Neural Networks for Burn Segmentation and Classification Tasks Using RGB Photographs: A Five-Year Systematic Review

Abstract

Background

Burn wound assessment remains challenging, with visual assessment accuracy often below 50% among non-specialists. Convolutional neural networks (CNNs) offer a promising alternative, with reported accuracies of 68.9%–95.4% for burn depth classification and 76.0%–99.4% for burn area segmentation. This review systematically evaluates CNN-based burn area segmentation (BAS), burn depth classification (BDC), and burn depth segmentation (BDS) using RGB photographs.

Methods

A systematic search of PubMed, Medline, Embase, and the Cochrane Library was conducted on 1 April 2025, covering January 2020 to April 2025. Studies applying CNNs to RGB images for one or more of the prediction tasks (BAS, BDC, or BDS) were included. Risk of bias was assessed using PROBAST+AI. Data on model architectures, datasets, and performance metrics were extracted and synthesised narratively.

Results

A total of 14 studies were included. For BAS, six studies reported accuracy above 90%, and five reported a Dice coefficient above 0.8; the combination of a ResNet-101 backbone with atrous spatial pyramid pooling (ASPP) provided a strong and stable baseline across studies. For BDC, four of six studies reported accuracy above 80%, and all six reported an F1 score above 0.73; the two best-performing models employed feature enhancement strategies, achieving up to 98% accuracy and an F1 score of 0.97. For BDS, low-quality data and inconsistent annotation were observed to degrade model performance.
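For reference, the two metrics cited above have standard definitions; for a binary per-pixel segmentation task the Dice coefficient and the F1 score are algebraically identical:

```latex
\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|} = \frac{2\,TP}{2\,TP + FP + FN} = F_1
```

The following is a minimal sketch of the ResNet-101 + ASPP pairing noted above, using torchvision's DeepLabV3 builder as one common way to combine the two. The included studies' exact architectures and class schemes vary, and the binary burn-versus-background setup here is an illustrative assumption, not a configuration reported by any specific study.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# DeepLabV3 couples a ResNet-101 backbone with an ASPP decoder head.
# num_classes=2 assumes a binary burn-vs-background mask; the reviewed
# studies use varying class counts. Requires torchvision >= 0.13.
model = deeplabv3_resnet101(weights=None, weights_backbone=None, num_classes=2)
model.eval()

rgb = torch.rand(1, 3, 512, 512)      # one RGB photograph, scaled to [0, 1]
with torch.no_grad():
    logits = model(rgb)["out"]        # shape: (1, 2, 512, 512)
mask = logits.argmax(dim=1)           # per-pixel burn/background prediction
```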

Conclusion

CNN-based models show strong potential in burn wound analysis using RGB images. However, considerable heterogeneity remains, and future studies should prioritise head-to-head comparisons and multicentre validation to strengthen model generalisability.