Clinician-informed XAI evaluation checklist with metrics (CLIX-M) for AI-powered clinical decision support systems
CSIRO's Australian eHealth Research Centre, Herston, QLD, Australia. aida.brankovic@csiro.au.
Summary
Clinical explainable AI (XAI) lacks evaluation standards. We developed a clinician-informed checklist to standardize XAI reporting, with the aim of enhancing trust and supporting AI adoption in healthcare.
Area of Science:
- Medical Informatics
- Artificial Intelligence in Medicine
- Clinical Decision Support
Background:
- The proliferation of clinical explainable AI (XAI) models presents challenges due to undefined objectives and unrealistic expectations about their explanatory capabilities.
- A significant gap exists in standardized metrics for evaluating the effectiveness and reliability of XAI in clinical settings.
Purpose of the Study:
- To address the lack of standardized evaluation metrics for clinical explainable AI (XAI).
- To develop a foundational tool for the standardization and transparent reporting of XAI methods in healthcare.
Main Methods:
- A 14-item checklist was developed through a clinician-informed process.
- The checklist incorporates attributes related to clinical relevance, machine learning model characteristics, and decision-making processes (an illustrative sketch of this structure follows below).
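As a minimal sketch only, the snippet below shows one hypothetical way a CLIX-M-style checklist could be represented programmatically, grouping items under the three attribute categories named above. The item wordings, class names, and scoring function are assumptions for illustration; the actual 14 items are defined in the paper.

```python
# Hypothetical representation of a CLIX-M-style evaluation checklist.
# Item texts below are illustrative placeholders, not the paper's items.
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    category: str        # e.g. "clinical relevance" (category names from the paper)
    description: str     # placeholder wording, not from the paper
    satisfied: bool = False


@dataclass
class XAIEvaluationReport:
    items: list[ChecklistItem] = field(default_factory=list)

    def completeness(self) -> float:
        """Fraction of checklist items the XAI report satisfies."""
        if not self.items:
            return 0.0
        return sum(item.satisfied for item in self.items) / len(self.items)


# Hypothetical usage with the three attribute groups named in the paper.
report = XAIEvaluationReport(items=[
    ChecklistItem("clinical relevance", "Explanation aligns with clinical workflow"),
    ChecklistItem("ML model characteristics", "Model class and explanation fidelity are reported"),
    ChecklistItem("decision-making process", "Explanation supports the clinical decision at hand"),
])
report.items[0].satisfied = True
print(f"Checklist completeness: {report.completeness():.0%}")  # -> 33%
```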
Main Results:
- The study produced a novel, clinician-informed 14-item checklist for XAI evaluation.
- This checklist represents a crucial initial step towards establishing standardized reporting for XAI in clinical applications.
Conclusions:
- The developed checklist is the first step towards standardizing XAI evaluation and reporting in clinical practice.
- Standardized reporting of XAI methods is essential to enhance trust, mitigate risks, encourage AI adoption, and ultimately improve clinical decision-making and enable assessment of XAI's true clinical potential.