Trustworthy AI in Digital Health: A Comprehensive Review of Robustness and Explainability

Abstract

Ensuring trust in AI systems is essential for the safe and ethical integration of machine learning into high-stakes domains such as digital health. Key dimensions, including robustness, explainability, fairness, accountability, and privacy, need to be addressed throughout the AI lifecycle, from problem formulation and data collection to model deployment and human interaction. While various contributions address different aspects of trustworthy AI, a focused synthesis on robustness and explainability, tailored specifically to the healthcare context, remains limited. This review addresses that need by organizing recent advancements into an accessible framework, highlighting both technical and practical considerations. We present a structured overview of methods, challenges, and solutions, aiming to support researchers and practitioners in developing reliable and explainable AI solutions for digital health. The review is organized into three main parts. First, we introduce the pillars of trustworthy AI and discuss the technical and ethical challenges, particularly in the context of digital health. Second, we explore application-specific trust considerations across domains such as intensive care, neonatal health, and metabolic health, highlighting how robustness and explainability support trust. Lastly, we present recent advancements in techniques for improving robustness under data scarcity and distributional shifts, as well as explainable AI methods ranging from feature attribution to gradient-based interpretations and counterfactual explanations. The paper also includes detailed discussions of contributions toward robustness and explainability in digital health, the development of trustworthy AI systems in the era of LLMs, and evaluation metrics for measuring trust and related properties such as validity, fidelity, and diversity.
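
To make one of the surveyed explainability families concrete, below is a minimal sketch of gradient-based (saliency) attribution. It is illustrative only: the toy model, the eight tabular features, and the synthetic patient record are assumptions for this example, not code or data from the paper.

    # Minimal sketch of gradient-based feature attribution (saliency),
    # one XAI family the review surveys. The model, feature count, and
    # input below are hypothetical placeholders, not from the paper.
    import torch
    import torch.nn as nn

    # Stand-in for a clinical risk classifier over 8 tabular features.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    model.eval()

    x = torch.randn(1, 8, requires_grad=True)  # one synthetic patient record
    logits = model(x)
    target = logits.argmax(dim=1).item()       # class whose score we attribute

    # Gradient of the target logit w.r.t. the input: large magnitudes flag
    # features whose small changes most affect the prediction.
    logits[0, target].backward()
    saliency = x.grad.abs().squeeze()
    print(saliency)  # per-feature attribution scores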

Article activity feed

  1. This Zenodo record is a permanently preserved version of a Structured PREreview. You can view the complete PREreview at https://prereview.org/reviews/17340450.

    Does the introduction explain the objective of the research presented in the preprint? Yes The introduction clearly explains the objective of the research: what the objective means and why its parameters matter for integrating AI effectively into digital health. Using examples and images, it illustrates the importance of these parameters and the drawbacks of failing to meet the objective when integrating AI into digital health. By pointing out the loopholes in previous review articles, it clearly motivates the objective of this research.
    Are the methods well-suited for this research? Somewhat appropriate The authors review previous review articles, identify their loopholes, and ensure those loopholes are resolved in the current work. Comparing the conclusions of earlier reviews with the better results presented here shows that the methods were well suited. Explaining robustness and explainability through various evaluation metrics such as trust, fidelity, proximity, and validity was effective. Still, a little more work could be done in the methods section.
    Are the conclusions supported by the data? Somewhat supported The conclusion sums up the whole idea of the research and is supported by the data provided to explain its objectives. What robustness and explainability mean, and how they can be achieved in current AI models, is well explained. Somewhat more discussion of the future of AI in digital health would have summed up the conclusion more effectively.
    Are the data presentations, including visualizations, well-suited to represent the data? Highly appropriate and clear The data presentations are excellent; using figures and tables to explain every part of the research was very thoughtful. They sum up the data so that it can be visualized and understood more effectively than plain text alone.
    How clearly do the authors discuss, explain, and interpret their findings and potential next steps for the research? Very clearly The authors clearly explain the objectives of the research by reviewing past reviews and their loopholes, and by showing how the current research resolves them. The figures and tables make the findings easy to interpret, supporting the case for carrying out the research. The discussion of recent LLM advancements in healthcare, and of each health domain, its open issues, and how they can be addressed, was effective, offering a viewpoint on the next steps for AI in digital health.
    Is the preprint likely to advance academic knowledge? Highly likely Yes, I agree that this preprint will benefit further development and research on AI in digital health, for the reasons given above.
    Would it benefit from language editing? No No such language editing is needed.
    Would you recommend this preprint to others? Yes, it's of high quality The preprint is very useful for understanding trustworthy AI and how such models can be developed further so that they integrate effectively into healthcare alongside healthcare professionals.
    Is it ready for attention from an editor, publisher or broader audience? Yes, as it is Yes, I think the preprint is ready.

    Competing interests

    The author declares that they have no competing interests.

    Use of Artificial Intelligence (AI)

    The author declares that they did not use generative AI to come up with new ideas for their review.

  2. This Zenodo record is a permanently preserved version of a Structured PREreview. You can view the complete PREreview at https://prereview.org/reviews/17107960.

    Does the introduction explain the objective of the research presented in the preprint? Yes The introduction briefly explains trustworthy AI and existing frameworks. It also clearly justifies the focus on the trustworthiness components of robustness and explainability for AI in healthcare, using real-world scenarios and applications.
    Are the methods well-suited for this research? Somewhat appropriate Although the authors provide a comprehensive review, they do not describe how the review was conducted or evaluated for risk of bias.
    Are the conclusions supported by the data? Highly supported The authors' conclusions are supported by a thorough presentation of their synthesis, with figures, tables, and examples that comprehensively address the study objective.
    Are the data presentations, including visualizations, well-suited to represent the data? Highly appropriate and clear The visuals are easy to comprehend and interpret and support the narrative. A key describing the colors should be included in Figure 3 to aid interpretability.
    How clearly do the authors discuss, explain, and interpret their findings and potential next steps for the research? Somewhat clearly The authors thoroughly discuss and interpret their findings using examples, figures, and summary tables. They should justify the proposed taxonomy to better aid understanding and utility for researchers and practitioners.
    Is the preprint likely to advance academic knowledge? Somewhat likely This preprint justifies the importance of robustness and explainability for AI applications in healthcare and gives an in-depth overview of methods, evaluation metrics, and examples. The publication would have been stronger if the authors had identified gaps or recommended opportunities for advancing robustness and explainability in healthcare AI.
    Would it benefit from language editing? No
    Would you recommend this preprint to others? Yes, it's of high quality
    Is it ready for attention from an editor, publisher or broader audience? Yes, after minor changes 1. The authors should consider including details on how the comprehensive review was conducted and evaluated for risk of bias, to aid transparency and comparability. 2. The authors should consider including a section that identifies gaps and opportunities or provides recommendations to guide researchers and practitioners in developing robust and explainable AI solutions. One such recommendation could be for research to identify or confirm the multiplicative gains in performance and/or utility when both robustness and explainability are achieved.

    Competing interests

    The author declares that they have no competing interests.