
Clinician-informed XAI evaluation checklist with metrics (CLIX-M) for AI-powered clinical decision support systems

Aida Brankovic1,2, David Cook3, Jessica Rahman4

  • 1CSIRO's Australian eHealth Research Centre, Herston, QLD, Australia. aida.brankovic@csiro.au.

npj Digital Medicine | June 14, 2025

Summary

Clinical explainable AI (XAI) lacks evaluation standards. We created a clinician-informed checklist to standardize XAI reporting, aiming to boost trust and improve AI adoption in healthcare.

Area of Science:

  • Medical Informatics
  • Artificial Intelligence in Medicine
  • Clinical Decision Support

Background:

  • The proliferation of clinical explainable AI (XAI) models presents challenges due to undefined objectives and unrealistic expectations about their explanatory capabilities.
  • A significant gap exists in standardized metrics for evaluating the effectiveness and reliability of XAI in clinical settings.

Purpose of the Study:

  • To address the lack of standardized evaluation metrics for clinical explainable AI (XAI).
  • To develop a foundational tool for the standardization and transparent reporting of XAI methods in healthcare.

Main Methods:

  • A 14-item checklist was developed through a clinician-informed process.
  • The checklist incorporates attributes related to clinical relevance, machine learning model characteristics, and decision-making processes.
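The checklist itself is not reproduced in this summary, but its structure — items grouped by attribute class, each tied to a reporting question — can be sketched as a minimal data model. All item names, questions, and the `completeness` helper below are illustrative assumptions, not content from the paper; only the three attribute groups come from the text above.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    # Hypothetical sketch: the paper's actual 14 items are not listed here,
    # so the questions below are invented placeholders.
    category: str       # one of the three attribute groups named in the summary
    question: str       # what an XAI report would need to address
    reported: bool = False

# Example items, one per attribute group (illustrative only).
checklist = [
    ChecklistItem("clinical relevance", "Is the explanation aligned with clinical workflow?"),
    ChecklistItem("model characteristics", "Is the underlying model type disclosed?"),
    ChecklistItem("decision-making", "Does the explanation support the decision at hand?"),
]

def completeness(items):
    """Fraction of checklist items addressed in an XAI report."""
    return sum(i.reported for i in items) / len(items)

checklist[0].reported = True
print(f"{completeness(checklist):.2f}")  # 0.33
```

A simple completeness fraction like this is one way such a checklist could feed a reporting metric; the paper's actual scoring scheme may differ.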

Main Results:

  • The study resulted in the creation of a novel, clinician-informed 14-item checklist for XAI evaluation.
  • This checklist represents a crucial initial step towards establishing standardized reporting for XAI in clinical applications.

Conclusions:

  • The developed checklist is the first step towards standardizing XAI evaluation and reporting in clinical practice.
  • Standardized reporting of XAI methods is essential to enhance trust, mitigate risks, and encourage AI adoption, ultimately improving clinical decision-making and enabling assessment of XAI's true clinical potential.
