
A Study Comparing Explainability Methods: A Medical User Perspective

Open Access | Jun 2025

References

  1. ARRIETA, A.B. – DÍAZ-RODRÍGUEZ, N. – DEL SER, J. – BENNETOT, A. – TABIK, S. – BARBADO, A. – GARCIA, S. – GIL-LOPEZ, S. – MOLINA, D. – BENJAMINS, R. – CHATILA, R. – HERRERA, F.: Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, vol. 58, pp. 82–115, 2020.
  2. BRENNEN, A.: What do people really want when they say they want ‘explainable AI’? We asked 60 stakeholders, Proceedings of the Conference on Human Factors in Computing Systems, Apr. 2020.
  3. LANGER, M. – OSTER, D. – SPEITH, T. – HERMANNS, H. – KÄSTNER, L. – SCHMIDT, E. – SESING, A. – BAUM, K.: What do we want from Explainable Artificial Intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, vol. 296, 2021.
  4. RIBEIRO, M. T. – SINGH, S. – GUESTRIN, C.: “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, 2016.
  5. LUNDBERG, S. M. – LEE, S. I.: A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems, pp. 4766–4775, Dec. 2017.
  6. RIBEIRO, M. T. – SINGH, S. – GUESTRIN, C.: Anchors: High-Precision Model-Agnostic Explanations, Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
  7. FRIEDMAN, J. H.: Greedy function approximation: A gradient boosting machine. Annals of Statistics, vol. 29, no. 5, pp. 1189–1232, 2001.
  8. ROSENFELD, A.: Better Metrics for Evaluating Explainable Artificial Intelligence, Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pp. 45–50, 2021.
  9. HOFFMAN, R. R. – MUELLER, S. T. – KLEIN, G. – LITMAN, J.: Metrics for Explainable AI: Challenges and Prospects. CoRR, 2018.
  10. CHROMIK, M. – SCHUESSLER, M.: A Taxonomy for Human Subject Evaluation of Black-Box Explanations in XAI. ExSS-ATEC@IUI, 2020.
  11. DIEBER, J. – KIRRANE, S.: Why model why? Assessing the strengths and limitations of LIME. arXiv, 2020.
  12. AECHTNER, J. – CABRERA, L. – KATWAL, D. – ONGHENA, P. – VALENZUELA, D. P. – WILBIK, A.: Comparing User Perception of Explanations Developed with XAI Methods, IEEE International Conference on Fuzzy Systems, July 2022.
  13. DAUDT, F. – CINALLI, D. – GARCIA, A. C. B.: Research on Explainable Artificial Intelligence Techniques: An User Perspective, IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD), pp. 144–149, 2021.
  14. WANG, X. – YIN, M.: Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons, ACM Transactions on Interactive Intelligent Systems, vol. 12, no. 4, 2022.
  15. SHIN, D.: The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, vol. 146, 2021.
DOI: https://doi.org/10.2478/aei-2025-0005 | Journal eISSN: 1338-3957 | Journal ISSN: 1335-8243
Language: English
Page range: 3 - 9
Submitted on: Jun 28, 2024
Accepted on: Nov 8, 2024
Published on: Jun 4, 2025
Published by: Technical University of Košice
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Miroslava Matejová, Lucia Gojdičová, Ján Paralič, published by Technical University of Košice
This work is licensed under the Creative Commons Attribution 4.0 License.