Abstract
Alzheimer’s disease (AD) is an irreversible brain condition that impairs memory and cognitive processes. Existing AD detection methods, however, have shown low diagnostic accuracy (ACC) due to limited image data and inefficient feature analysis. In this paper, a novel AD-HOLDER model is proposed for early recognition of AD using two imaging modalities: magnetic resonance imaging (MRI) and positron emission tomography (PET). The proposed AD-HOLDER model is an integrated framework that uniquely combines Deep Image Prior (DIP) denoising, Histogram of Oriented Gradients (HOG) feature extraction, and Light Gradient Boosting Machine (LGBM) classification for AD detection from MRI and PET images. The HOG method enhances the spatial and contextual representation of neurological patterns by combining structural features from MRI with statistical features from PET images. An LGBM-based classifier processes the dual MRI and PET features to classify images as either normal or abnormal, effectively capturing complex patterns and improving classification ACC. Abnormal regions are then segmented using a graph-based segmentation (GBS) model to accurately localize the affected areas. The effectiveness of the proposed AD-HOLDER model is evaluated on the OASIS dataset using ACC, specificity (SPE), precision (PRE), recall (REC), and F1-score (F1). The proposed AD-HOLDER model achieves a classification ACC of 99.12% and improves overall ACC by 1.55%, 25.44%, and 3.14% compared with the Gradient Boosting Algorithm (GBA), Explainable Artificial Intelligence (XAI), and Computer-Aided Diagnosis (CAD) systems, respectively.
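As a minimal illustration of the HOG feature extraction and LGBM classification stages summarized above, the following Python sketch pairs scikit-image's HOG descriptor with a LightGBM classifier. The array shapes, random placeholder data, and hyperparameters are assumptions for demonstration only and are not taken from the paper; the DIP denoising, PET feature fusion, and GBS segmentation stages are omitted.

```python
# Minimal sketch of a HOG + LGBM normal/abnormal classification stage.
# Placeholder inputs stand in for preprocessed OASIS slices; all shapes
# and hyperparameters below are illustrative assumptions.
import numpy as np
from skimage.feature import hog
from lightgbm import LGBMClassifier

def extract_hog_features(slices):
    """Compute HOG descriptors for a batch of 2-D image slices (H x W)."""
    return np.array([
        hog(s, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for s in slices
    ])

# Synthetic stand-in data: 40 denoised 64x64 slices with binary labels
# (0 = normal, 1 = abnormal); real inputs would come from the DIP stage.
rng = np.random.default_rng(0)
X_img = rng.random((40, 64, 64))
y = rng.integers(0, 2, size=40)

X = extract_hog_features(X_img)
clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X, y)
print(clf.predict(X[:5]))
```

In this sketch the HOG descriptors act as the hand-crafted structural features and the gradient-boosted trees learn the decision boundary between the two classes; in the full pipeline the same classifier would also receive the statistical PET features.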