
Interpretability Using Reconstruction of Capsule Networks

Open Access | Sep 2024

Abstract

This paper evaluates the effectiveness of different decoder architectures in enhancing the reconstruction quality of Capsule Neural Networks (CapsNets), which impacts model interpretability. We compared linear, convolutional, and residual decoders to assess their performance in improving CapsNet reconstructions. Our experiments revealed that the Conditional Variational Autoencoder Capsule Network (CVAECapOSR) achieved the best reconstruction quality on the CIFAR-10 dataset, while the residual decoder outperformed the others on the Brain Tumor MRI dataset. These findings highlight how improved decoder architectures yield higher-quality reconstructions that reveal the changes induced by deforming the output capsules, thereby making the feature extraction and classification processes within CapsNets more transparent and interpretable. Additionally, we evaluated the computational efficiency and scalability of each decoder, providing insights into their practical deployment in real-world applications such as medical diagnostics and autonomous driving.
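The interpretability probe described above, reconstructing from the output capsules and deforming individual capsule dimensions to see what they encode, can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the capsule count, capsule dimension, image size, and the plain linear decoder are assumptions (the shapes follow the original CapsNet paper, not necessarily this one), and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes, not taken from this paper: 10 output capsules of
# dimension 16 (as in the original CapsNet), reconstructing a flattened
# 32x32x3 image such as CIFAR-10.
NUM_CAPS, CAPS_DIM, IMG_PIXELS = 10, 16, 32 * 32 * 3

# A linear decoder is a single weight matrix from the masked capsule
# vector to the flattened image; here the weights are random stand-ins.
W = rng.normal(scale=0.01, size=(NUM_CAPS * CAPS_DIM, IMG_PIXELS))

def decode(capsules, predicted_class):
    """Zero out all capsules except the predicted one, then reconstruct."""
    masked = np.zeros_like(capsules)
    masked[predicted_class] = capsules[predicted_class]
    return masked.reshape(-1) @ W

capsules = rng.normal(size=(NUM_CAPS, CAPS_DIM))
base = decode(capsules, predicted_class=3)

# Interpretability probe: deform one dimension of the winning capsule
# and compare reconstructions; the difference image shows what visual
# property that dimension controls.
perturbed = capsules.copy()
perturbed[3, 0] += 0.25
delta = decode(perturbed, predicted_class=3) - base
```

Swapping `decode` for a convolutional or residual decoder changes only the reconstruction step; the masking and perturbation procedure stays the same.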

DOI: https://doi.org/10.2478/aei-2024-0010 | Journal eISSN: 1338-3957 | Journal ISSN: 1335-8243
Language: English
Page range: 15 - 22
Submitted on: May 23, 2024
Accepted on: Jul 15, 2024
Published on: Sep 19, 2024
Published by: Technical University of Košice
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Dominik Vranay, Mykhailo Ruzmetov, Peter Sinčák, published by Technical University of Košice
This work is licensed under the Creative Commons Attribution 4.0 License.