The SAIGE Framework for Risk Stratification in Spine Surgery
Abstract
Artificial intelligence (AI) and machine learning (ML) in spine surgery offer transformative possibilities alongside notable challenges related to regulatory oversight, algorithmic bias, and clinical responsibility. We propose a governance model to address these issues and ensure the responsible use of AI tools. The framework introduces the SAIGE-R Index, a tool that scores AI system risk along three dimensions: Clinical Volatility, System Integration Risk, and Data Integrity Confidence. This index supports a tiered oversight system, ranging from minimal checks for low-risk systems to thorough FDA review for high-risk applications. In addition, SAIGE sets validation standards specific to spine surgery outcomes, including clinically important differences in patient-reported outcome measures and pedicle screw placement accuracy, together with quarterly fairness audits to reduce demographic bias. The framework also describes a governance structure centered on ongoing clinician training, involvement from multiple stakeholders, and strict data security measures, and it proposes a liability model that aligns responsibility with the assessed risk level of each AI tool. By addressing validation, ethics, and accountability, the SAIGE Framework provides a foundation for safely and effectively incorporating AI into complex surgical settings, encouraging innovation while preserving patient safety and clinical integrity.
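As a concrete illustration of the risk-stratification idea, the minimal Python sketch below combines the three SAIGE-R dimensions into a single score and maps it to an oversight tier. The field names, equal weighting, inversion of data-integrity confidence, and tier cut-points are illustrative assumptions only; the abstract does not specify the scoring formula.

```python
from dataclasses import dataclass


@dataclass
class SaigeRInputs:
    clinical_volatility: float        # 0-1, higher = less stable clinical context
    system_integration_risk: float    # 0-1, higher = deeper workflow integration
    data_integrity_confidence: float  # 0-1, higher = more trustworthy input data


def saige_r_index(inputs: SaigeRInputs) -> float:
    """Combine the three dimensions into a single 0-1 risk score.

    Equal weighting and the inversion of data-integrity confidence are
    illustrative assumptions, not the published formula.
    """
    data_integrity_risk = 1.0 - inputs.data_integrity_confidence
    return (inputs.clinical_volatility
            + inputs.system_integration_risk
            + data_integrity_risk) / 3.0


def oversight_tier(score: float) -> str:
    """Map the index to an oversight tier; cut-points are hypothetical."""
    if score < 0.33:
        return "Tier 1: minimal internal checks"
    if score < 0.66:
        return "Tier 2: institutional review and monitoring"
    return "Tier 3: full FDA-level review"


# Example: a system with moderate integration depth and high-quality input data
example = SaigeRInputs(clinical_volatility=0.4,
                       system_integration_risk=0.5,
                       data_integrity_confidence=0.9)
score = saige_r_index(example)
print(f"SAIGE-R = {score:.2f} -> {oversight_tier(score)}")
```

In this sketch, higher data-integrity confidence lowers the composite score, reflecting the abstract's framing of data integrity as a mitigating factor rather than a risk in itself.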