Applied Image Recognition for Identifying Risk Factors in OR Nursing Practice


Abstract

In response to the thematic focus of the Frontiers in Computer Science—Computer Vision section on advancing vision and image analysis technology in practical real-world domains, including medical imaging, our study addresses the critical need for automated identification of peri-operative risk factors in operating room nursing practice. Traditional approaches in OR nursing rely primarily on manual surveillance, checklists, and post-hoc review, which are prone to human oversight, lack real-time responsiveness, and struggle to scale across diverse visual contexts. Our proposed RoLIE system leverages a deep learning-based image recognition framework trained on annotated OR imagery to detect predefined risk-related objects and nurse–environment interactions indicative of hazard potential. Using convolutional neural networks fine-tuned on our curated dataset, our method automatically flags safety violations such as misplaced instruments, improper positioning of personnel, and environmental clutter. The system demonstrates strong real-time performance, generalizes across varying surgical scenes, and reduces detection latency compared to conventional manual methods. Experimental evaluation shows detection accuracy exceeding 90% across key risk categories, with faster alert generation and fewer false negatives. By applying state-of-the-art computer vision techniques directly aligned with the journal's emphasis on image analysis and object recognition in healthcare contexts, RoLIE offers a robust, scalable, and automated tool designed to enhance patient safety and support evidence-based nursing workflows in the operating room.
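The abstract does not specify RoLIE's internals, but the flagging stage it describes — mapping per-frame detector outputs to safety alerts — can be sketched in plain Python. The category names, thresholds, and `Detection` structure below are illustrative assumptions, not the paper's actual interface:

```python
from dataclasses import dataclass

# Hypothetical risk categories and confidence thresholds; the paper's
# actual label set and operating points are not given in the abstract.
RISK_THRESHOLDS = {
    "misplaced_instrument": 0.60,
    "improper_positioning": 0.70,
    "environmental_clutter": 0.55,
}

@dataclass
class Detection:
    label: str               # detector class name
    score: float             # confidence in [0, 1]
    box: tuple               # (x1, y1, x2, y2) in pixels

def flag_risks(detections):
    """Return (label, score) pairs whose confidence meets the
    per-category threshold; other detections are ignored."""
    flags = []
    for det in detections:
        threshold = RISK_THRESHOLDS.get(det.label)
        if threshold is not None and det.score >= threshold:
            flags.append((det.label, det.score))
    return flags

frame = [
    Detection("misplaced_instrument", 0.82, (40, 60, 120, 150)),
    Detection("environmental_clutter", 0.40, (0, 0, 300, 200)),
]
print(flag_risks(frame))  # only the high-confidence instrument detection is flagged
```

Per-category thresholds like these are a common way to trade off false negatives against alert fatigue, which the abstract identifies as a key evaluation axis.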
