Impact of Iron Deficiency on Clinical Outcomes in Congestive Heart Failure: A Retrospective Analysis of Risk Stratification and Mortality


Abstract

Introduction

Iron deficiency frequently coexists with congestive heart failure and increases morbidity and mortality. Although guidelines typically define iron deficiency as ferritin under 100 ng/mL, or ferritin 100–299 ng/mL with transferrin saturation below 20%, precise stratification remains elusive. Intravenous iron therapy has demonstrated benefit, yet clear delineations among deficiency, adequacy, and hyperferritinemia are lacking.

Aim

This study compared a machine learning–derived model with a clinically based model for predicting time-to-event outcomes in congestive heart failure, aiming to refine risk stratification relative to current iron deficiency guidelines.

Material and Methods

This retrospective study included adults aged 18–89 with congestive heart failure who were hospitalized for at least one day and had iron panels obtained within 24 hours of admission. Electronic data were gathered by text mining and standard record searches. The Kolmogorov–Smirnov test guided normality assessment, and parametric or nonparametric tests were selected accordingly. Kaplan–Meier analysis and Cox regression assessed one-year mortality. The machine learning model ("final category tertile") defined T1 as ferritin below 100 ng/mL, or ferritin 100–299 ng/mL with transferrin saturation under 20%; T2 as ferritin 100–299 ng/mL with transferrin saturation 20% or higher; and T3 as ferritin above 300 ng/mL. The clinically based ferritin custom tertile model comprised T1: ferritin below 100 ng/mL; T2: ferritin 100–299 ng/mL; and T3: ferritin 300 ng/mL or above. Covariates including age, shock, malignancy, eGFR, and other factors were entered into multivariable models (α=0.05).
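The two tertile schemes described above can be expressed as simple classification rules. The following Python sketch illustrates them; function names are illustrative, and the handling of a ferritin value of exactly 300 ng/mL in the machine learning model is an assumption (the abstract states "above 300" for T3 without specifying the boundary).

```python
def classify_final_category(ferritin_ng_ml: float, tsat_pct: float) -> str:
    """Machine learning 'final category tertile' model, as described in Methods.

    T1: ferritin < 100, or ferritin 100-299 with TSAT < 20%
    T2: ferritin 100-299 with TSAT >= 20%
    T3: ferritin above 300 (boundary at exactly 300 assumed to fall in T3)
    """
    if ferritin_ng_ml < 100 or (ferritin_ng_ml <= 299 and tsat_pct < 20):
        return "T1"
    if ferritin_ng_ml <= 299:
        return "T2"
    return "T3"


def classify_ferritin_custom(ferritin_ng_ml: float) -> str:
    """Clinically based ferritin custom tertile model (ferritin only).

    T1: ferritin < 100; T2: ferritin 100-299; T3: ferritin >= 300
    """
    if ferritin_ng_ml < 100:
        return "T1"
    if ferritin_ng_ml <= 299:
        return "T2"
    return "T3"
```

Note how the models diverge: a patient with ferritin 150 ng/mL and transferrin saturation 15% is iron deficient (T1) under the machine learning scheme but falls into T2 under the ferritin-only scheme.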

Results

Univariate Cox analyses indicated stronger hazard separation with the machine learning approach than with the simpler binary criteria suggested by current guidelines (p<0.001). T3 (hyperferritinemia) showed increased mortality risk relative to T1, with T2 in an intermediate range. Multivariable models preserved the discriminative power of the machine learning–based strategy (T3 hazard ratio ≈1.84). The ferritin custom tertile classification was also significant (T3 hazard ratio ≈1.48), though with slightly lower early sensitivity. Additional findings suggested that iron supplementation reduced hazard in deficiency strata, reinforcing the importance of precise categorization.

Conclusion

Refined iron-status stratification appears pivotal for risk assessment in congestive heart failure. The machine learning–based model provided clearer identification of high-risk subsets than standard cutoffs, differentiating deficiency, adequacy, and potential overload. Prospective validation may support adoption of nuanced models, enhance management strategies, and potentially reduce adverse outcomes.
