Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion, 58:82–115, 2020.
Waddah Saeed and Christian Omlin. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Syst., 263:110273, 2023.
Dang Minh, H Xiang Wang, Y Fen Li, and Tan N Nguyen. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev., pages 1–66, 2022.
Nataliya Shakhovska, Andrii Shebeko, and Yarema Prykarpatsky. A novel explainable AI model for medical data analysis. Journal of Artificial Intelligence and Soft Computing Research, 14(2):121–137, 2024.
Ivan Laktionov, Grygorii Diachenko, Danuta Rutkowska, and Marek Kisiel-Dorohinicki. An explainable AI approach to agrotechnical monitoring and crop diseases prediction in Dnipro region of Ukraine. Journal of Artificial Intelligence and Soft Computing Research, 13(4):247–272, 2023.
Guang Yang, Qinghao Ye, and Jun Xia. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf. Fusion, 77:29–52, 2022.
Roel Henckaerts, Katrien Antonio, and Marie-Pier Côté. When stakes are high: Balancing accuracy and transparency with model-agnostic interpretable data-driven surrogates. Expert Syst. Appl., 202:117230, 2022.
Bas HM Van der Velden, Hugo J Kuijf, Kenneth GA Gilhuijs, and Max A Viergever. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal., 79:102470, 2022.
Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, et al. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Comput. Surv., 55(9):1–33, 2023.
Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, et al. Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Inf. Fusion, 106:102301, 2024.
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, and Jugal Kalita. Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction. Neurocomputing, page 128111, 2024.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?” Explaining the predictions of any classifier. In Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., pages 1135–1144, 2016.
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proc. IEEE Int. Conf. Comput. Vision, pages 618–626, 2017.
Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: Randomized input sampling for explanation of black-box models. In Proc. Br. Mach. Vis. Conf., 2018.
Aaron Fisher, Cynthia Rudin, and Francesca Dominici. All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res., 20(177):1–81, 2019.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pages 2921–2929, 2016.
Daniel Fryer, Inga Strümke, and Hien Nguyen. Shapley values for feature selection: The good, the bad, and the axioms. IEEE Access, 9:144352–144360, 2021.
Muhammad Rehman Zafar and Naimul Khan. Deterministic local interpretable model-agnostic explanations for stable explainability. Mach. Learn. Knowl. Extr., 3(3):525–541, 2021.
Sheng Shi, Yangzhou Du, and Wei Fan. Kernel-based LIME with feature dependency sampling. In Proc. Int. Conf. Pattern Recognit., pages 9143–9148. IEEE, 2021.
Qingyao Ai and Lakshmi Narayanan R. Model-agnostic vs. model-intrinsic interpretability for explainable product search. In Proc. Int. Conf. Inf. Knowl. Manage., pages 5–15, 2021.
Abiodun M Ikotun, Absalom E Ezugwu, Laith Abualigah, Belal Abuhaija, and Jia Heming. K-means clustering algorithms: A comprehensive review, variants analysis, and advances in the era of big data. Inf. Sci., 622:178–210, 2023.
Heinrich Jiang, Jennifer Jang, and Samory Kpotufe. Quickshift++: Provably good initializations for sample-based mean shift. In Proc. Int. Conf. Mach. Learn., pages 2294–2303. PMLR, 2018.
Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. StarGAN v2: Diverse image synthesis for multiple domains. In Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pages 8188–8197, 2020.
Teng Li, Amin Rezaeipanah, and ElSayed M Tag El Din. An ensemble agglomerative hierarchical clustering algorithm based on clusters clustering technique and the novel similarity measurement. J. King Saud Univ. Comput. Inf. Sci., 34(6):3828–3842, 2022.
Xin Han, Ye Zhu, Kai Ming Ting, and Gang Li. The impact of isolation kernel on agglomerative hierarchical clustering algorithms. Pattern Recognit., 139:109517, 2023.
Diogo V Carvalho, Eduardo M Pereira, and Jaime S Cardoso. Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8):832, 2019.
Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proc. IEEE Int. Conf. Comput. Vision, pages 3429–3437, 2017.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pages 1–9, 2015.