
Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations

Open Access | Mar 2025

Language: English
Page range: 257 - 277
Submitted on: Oct 3, 2024
Accepted on: Jan 14, 2025
Published on: Mar 18, 2025
Published by: SAN University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 times per year

© 2025 Junyan Qian, Tong Wen, Ming Ling, Xiaofu Du, Hao Ding, published by SAN University
This work is licensed under the Creative Commons Attribution 4.0 License.