In this paper, a novel method for detecting laryngeal pathologies using deep neural networks and time–frequency signal processing techniques is presented. The proposed approach combines empirical mode decomposition (EMD) and wavelet analysis to extract discriminative features from healthy and pathological voice recordings obtained from the Saarbrücken Voice Database (SVD). Each voice signal is pre-processed and decomposed into intrinsic mode functions (IMFs), from which the most relevant IMF is selected according to a temporal energy criterion. Two feature sets are derived from the selected IMF: Mel-frequency cepstral coefficients (MFCCs) and continuous wavelet transform (CWT) coefficients. These features are converted into Mel-spectrogram and scalogram images, respectively, which serve as inputs to the AlexNet convolutional neural network (AlexNet-CNN) for automatic binary classification. To the best of our knowledge, this is the first study to combine scalogram representations with AlexNet-CNN for pathological voice detection. The results show that the proposed method achieves a classification accuracy of 85.66 % with Mel-spectrograms and 86.4 % with scalograms, demonstrating its potential for effective and interpretable voice pathology screening.
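The IMF-selection step described above can be sketched in a few lines of NumPy. This is a minimal illustration only, assuming the "temporal energy criterion" means picking the IMF with the largest total energy Σ x[n]²; the paper's exact criterion and pre-processing may differ, and the synthetic signals below merely stand in for real EMD output.

```python
import numpy as np

def select_imf_by_energy(imfs):
    """Return the index of the most energetic IMF and all IMF energies.

    Assumption: the 'temporal energy criterion' is interpreted here as
    maximizing the total energy sum(x[n]^2) of each IMF; the original
    study may use a different or normalized criterion.
    """
    imfs = np.asarray(imfs)
    energies = np.sum(imfs ** 2, axis=1)  # per-IMF temporal energy
    return int(np.argmax(energies)), energies

# Toy stand-ins for IMFs produced by EMD: three sinusoids of
# decreasing frequency (as real IMFs would be) and varying amplitude.
t = np.linspace(0.0, 1.0, 1000)
imfs = np.stack([
    0.2 * np.sin(2 * np.pi * 50 * t),   # low-amplitude, high-frequency
    1.0 * np.sin(2 * np.pi * 10 * t),   # dominant component
    0.5 * np.sin(2 * np.pi * 5 * t),    # low-frequency component
])

idx, energies = select_imf_by_energy(imfs)
print(idx)  # index of the IMF retained for MFCC / CWT feature extraction
```

In the full pipeline, the selected IMF would then be passed to an MFCC extractor (for Mel-spectrograms) and to a CWT (for scalograms) before being rendered as images for AlexNet-CNN.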
© 2025 Sofiane Cherif, Abdelhafid Kaddour, Abdelmoudjib Benkada, Said Karoui, published by Slovak Academy of Sciences, Institute of Measurement Science
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.