LisfrancNet: An Advanced DenseNet-Based Deep Learning Model for Automated Detection of Lisfranc Injuries on Radiographs
Abstract
Background: Lisfranc injuries are frequently misdiagnosed on conventional radiographs, with misdiagnosis rates reaching 20%. While advanced imaging modalities offer higher diagnostic accuracy, their widespread application is limited by accessibility and cost constraints.

Methods: This retrospective study analyzed 600 anteroposterior foot radiographs (300 with Lisfranc injuries, 300 normal) from a single center. Images were preprocessed and standardized to 512×512 pixels. The proposed LisfrancNet model integrated Squeeze-and-Excitation modules and heterogeneous convolutions into a DenseNet architecture. Model performance was evaluated using sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC).

Results: On the standardized dataset, the LisfrancNet model achieved 91.5% accuracy (95% CI: 89.2%-93.8%) and an AUC of 97.16% (95% CI: 95.8%-98.5%), representing a 5% improvement over the original DenseNet. The model demonstrated superior sensitivity (90.67%) and specificity (92.33%) compared to traditional radiographic assessment (65.4% sensitivity). Pretraining significantly enhanced model performance across all metrics.

Conclusions: The LisfrancNet model demonstrates promising performance in automated detection of Lisfranc injuries on single-view radiographs, potentially offering an efficient and cost-effective screening tool for primary healthcare settings. Multi-center validation studies are needed to confirm its generalizability in diverse clinical scenarios.
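The Methods describe integrating Squeeze-and-Excitation (SE) modules and heterogeneous convolutions into a DenseNet backbone, but no implementation accompanies this abstract. The sketch below is a minimal illustration, assuming PyTorch and torchvision (≥0.13), of how an SE block might be attached to a DenseNet feature extractor for a binary injury-vs-normal classifier on 512×512 inputs. The class names (SEBlock, LisfrancNetSketch), the densenet121 backbone, and the reduction ratio of 16 are assumptions for demonstration, not the authors' LisfrancNet; the heterogeneous convolutions are omitted.

```python
# Illustrative sketch only -- not the authors' LisfrancNet implementation.
# Assumes PyTorch + torchvision >= 0.13; backbone choice, SEBlock, and the
# reduction ratio are assumptions made for demonstration.
import torch
import torch.nn as nn
from torchvision.models import densenet121


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: HxW -> 1x1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excite: rescale channels


class LisfrancNetSketch(nn.Module):
    """DenseNet feature extractor + SE recalibration + binary head."""

    def __init__(self, pretrained: bool = True):
        super().__init__()
        backbone = densenet121(weights="DEFAULT" if pretrained else None)
        self.features = backbone.features              # dense blocks and transitions
        self.se = SEBlock(backbone.classifier.in_features)
        self.head = nn.Linear(backbone.classifier.in_features, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = torch.relu(self.features(x))
        f = self.se(f)
        f = nn.functional.adaptive_avg_pool2d(f, 1).flatten(1)
        return self.head(f)                            # logits: injury vs. normal


if __name__ == "__main__":
    model = LisfrancNetSketch(pretrained=False)
    logits = model(torch.randn(1, 3, 512, 512))        # 512x512 input per the Methods
    print(logits.shape)                                # torch.Size([1, 2])
```

The SE block learns a per-channel weighting from globally pooled context, which is one plausible way such modules recalibrate DenseNet feature maps; pretraining, as reported in the Results, would correspond to loading ImageNet weights before fine-tuning on the radiograph dataset.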