Which explanations do clinicians prefer? A comparative evaluation of XAI understandability and actionability in predicting the need for hospitalization
Bergomi L.; Nicora G.; Orlowska M. A.; Podrecca C.; Bellazzi R.; Fregosi C.; Salinaro F.; Bonzano M.; Crescenzi G.; Speciale F.; Di Pietro S.; Zuccaro V.; Asperges E.; Valsecchi P.; Pagani E.; Catalano M.; Bortolotto C.; Preda L.; Parimbelli E.
2025-01-01
Abstract
Background
This study addresses the gap in understanding clinicians' attitudes toward explainable AI (XAI) methods applied to machine learning models that use tabular data, which are common in clinical settings. It specifically explores clinicians' perceptions of different XAI methods from the ALFABETO project, which predicts COVID-19 patients' need for hospitalization from clinical, laboratory, and chest X-ray data collected at the time of presentation to the Emergency Department. The focus is on two cognitive dimensions of the explanations produced by explainable-by-design and post-hoc methods: understandability and actionability.

Methods
A questionnaire-based experiment was conducted with 10 clinicians from the IRCCS Policlinico San Matteo Foundation in Pavia, Italy. Each clinician evaluated 10 real-world cases, rating the predictions and explanations produced by three XAI tools: Bayesian networks, SHapley Additive exPlanations (SHAP), and AraucanaXAI. For each method, two cognitive statements were rated on a Likert scale, together with agreement with the prediction. Two clinicians answered the survey during think-aloud interviews.

Results
Clinicians showed generally positive attitudes toward AI, but high compliance rates (86% on average) indicate a risk of automation bias. Understandability and actionability were positively correlated, with SHAP being the preferred method owing to its simplicity. However, the perception of the methods varied with clinicians' specialty and expertise.

Conclusions
The findings suggest that SHAP and AraucanaXAI are promising candidates for improving the use of XAI in clinical decision support systems (DSSs), and highlight the influence of clinicians' expertise, specialty, and setting on the selection and development of supportive XAI advice. Finally, the study provides valuable insights into the design of future XAI DSSs.
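Since SHAP emerged as the clinicians' preferred method, a minimal sketch of how SHAP explanations are typically generated for a tabular classifier may help situate the abstract. This is not the ALFABETO implementation: the model choice, the feature names (age, crp, spo2), and the synthetic data below are all hypothetical stand-ins.

```python
# Minimal sketch: per-patient SHAP explanation for a tabular classifier.
# Hypothetical features and labels, not the ALFABETO model or data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(20, 90, 200),     # hypothetical feature: patient age
    "crp": rng.uniform(0, 200, 200),      # hypothetical feature: C-reactive protein
    "spo2": rng.uniform(80, 100, 200),    # hypothetical feature: oxygen saturation
})
y = (X["spo2"] < 92).astype(int)          # toy "hospitalization" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# each value attributes part of one patient's prediction to one feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
print(shap_values)
```

Per-feature attributions of this kind are the unit of explanation that clinicians in the study rated for understandability and actionability.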


