Analysis of Performance Differences Between Self-Developed Fundus Surgery Image Evaluation Software and Kinovea in Ophthalmic Microsurgery
Abstract
Objective This study systematically compares the performance of the self-developed "Fundus Surgery Image Evaluation Software" and the open-source tool Kinovea in quantifying key parameters of ophthalmic microsurgical operations.

Methods The self-developed "Fundus Surgery Image Evaluation Software" integrates modules for video cropping, key-point annotation, reference-frame verification, and automatic output, enabling simultaneous measurement of needle insertion depth and tremor amplitude. Its performance was evaluated in validation experiments (in vitro models, ex vivo porcine eyes, and clinical surgical videos) and compared with that of Kinovea. For the in vitro models, standard needle insertion depths of 300–700 µm and standard tremor values of 50–250 µm were set; ex vivo porcine eyes were used to approximate the human retinal environment, with needle insertion depth and tremor measured over the same ranges; and clinical surgical videos were analyzed against the dynamic retinal background of living subjects. The main outcome measures were needle insertion depth and the mean, maximum, minimum, median, and variance of tremor. The measurement deviation and stability of the two software tools were compared.

Results In all experimental scenarios, the self-developed software significantly outperformed Kinovea. In the in vitro scenario, its needle insertion depth error was < 0.5%, whereas Kinovea exhibited a systematic positive bias of 40%–68%. For the tremor segments, the self-developed software slightly underestimated amplitude but detected tremor throughout the entire process, whereas Kinovea barely detected it (mean value < 5 µm). In the ex vivo porcine eye experiment, the needle insertion deviation of the self-developed software was within ±20%, the mean tremor value was < 160 µm, and stability was maintained across segments; in contrast, Kinovea consistently overestimated needle insertion depth, producing an extreme value of 1,500 µm, and its tremor measurements were disturbed by background textures, yielding a 2- to 8-fold overestimation and an abnormal spike of 35,618 µm. In the clinical video analysis, the needle insertion depth measured by the self-developed software ranged from 198 to 934 µm, with a mean tremor value < 250 µm and a maximum tremor value < 800 µm, all within the safe range. For Kinovea, the maximum needle insertion depth reached 2,107 µm and the maximum tremor value was 2,101 µm, and 8 of 10 cases showed a 1.2- to 4.6-fold systematic overestimation.

Conclusion The self-developed "Fundus Surgery Image Evaluation Software" maintains high accuracy, low dispersion, and values within the clinically acceptable range under blank, tissue-based, and complex in vivo background conditions, making it safe for quantitative evaluation of retinal microsurgery. Owing to the limitations of its template-matching approach, Kinovea exhibits large systematic biases and extreme outliers and is therefore not suitable for direct measurement of ophthalmic microsurgical procedures.
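To make the tremor indicators listed above concrete (mean, maximum, minimum, median, and variance), the following is a minimal sketch of how such statistics could be derived from a sequence of tracked needle-tip positions. The function name, the pixel-to-micrometre calibration factor, and the use of frame-to-frame displacement as the tremor amplitude are illustrative assumptions and do not reflect the authors' actual implementation.

```python
# Illustrative sketch only: names, calibration, and the definition of tremor
# amplitude are assumptions, not the software's actual implementation.
import numpy as np

def tremor_statistics(tip_positions_px, um_per_px):
    """Summarise tremor from a sequence of (x, y) needle-tip positions.

    tip_positions_px : array-like of shape (n_frames, 2), tip coordinates in pixels
    um_per_px        : calibration factor converting pixels to micrometres
    """
    pts = np.asarray(tip_positions_px, dtype=float) * um_per_px
    # Frame-to-frame displacement magnitude is taken here as the tremor amplitude.
    disp = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return {
        "mean_um": float(np.mean(disp)),
        "max_um": float(np.max(disp)),
        "min_um": float(np.min(disp)),
        "median_um": float(np.median(disp)),
        "variance_um2": float(np.var(disp)),
    }
```

In such a scheme, tip positions tracked across a video segment and calibrated against a known scale in the reference frame would yield per-segment summaries comparable to the tremor statistics reported in the Results.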