References
- V. Velardo, “AudioSignalProcessingForML,” GitHub repository, 10 October 2020. [Online]. Available: https://github.com/musikalkemist/AudioSignalProcessingForML. [Accessed 27 November 2023].
- J. P. Bello, L. Daudet, S. Abdallah, C. Duxbury, M. Davies and M. B. Sandler, “A tutorial on onset detection in music signals,” IEEE Transactions on Speech and Audio Processing, pp. 1035-1047, 2005.
- G. T. Vallet, D. I. Shore and M. Schutz, “Exploring the role of the amplitude envelope in duration estimation,” Perception, vol. 43, no. 7, pp. 616-630, 2014.
- L. Chuen and M. Schutz, “The unity assumption facilitates cross-modal binding of musical, non-speech stimuli: The role of spectral and amplitude envelope cues,” Attention, Perception, and Psychophysics, pp. 1512-1528, 2016.
- M. Schutz, J. Stefanucci, S. Baum and A. Roth, “Name that percussive tune: Associative memory and amplitude envelope,” Quarterly Journal of Experimental Psychology, pp. 1323-1343, 2017.
- S. Sreetharan, J. Schlesinger and M. Schutz, “Decaying amplitude envelopes reduce alarm annoyance: Exploring new approaches to improving auditory interfaces,” Applied Ergonomics, 2021.
- Y. Jézéquel, L. Chauvaud and J. Bonnel, “Spiny lobster sounds can be detectable over kilometres underwater,” Scientific Reports, vol. 10, 2020.
- C. Brown, J. Chauhan, A. Grammenos, J. Han, A. Hasthanasombat, D. Spathis, T. Xia, P. Cicuta and C. Mascolo, “Exploring Automatic Diagnosis of COVID-19 from Crowdsourced Respiratory Sound Data,” Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’20), pp. 3474-3484, 2020.
- G. Sharma, K. Umapathy and S. Krishnan, “Trends in audio signal feature extraction methods,” Applied Acoustics, vol. 158, 2020.
- Y. A. Ibrahim, J. C. Odiketa and T. S. Ibiyemi, “Preprocessing technique in automatic speech recognition for human computer interaction: an overview,” Ann Comput Sci Ser, vol. 15, no. 1, pp. 186-191, 2017.
- S. Chu, S. Narayanan and C.-C. J. Kuo, “Environmental Sound Recognition With Time–Frequency Audio Features,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 6, pp. 1142-1158, 2009.
- S. Sivasankaran and K. Prabhu, “Robust features for environmental sound classification,” IEEE International Conference on Electronics, Computing and Communication Technologies, pp. 1-6, 2013.
- F. Alías, J. C. Socoró and X. Sevillano, “A Review of Physical and Perceptual Feature Extraction Techniques for Speech, Music and Environmental Sounds,” Applied Sciences, vol. 6, no. 5, 2016.
- R. Islam, E. Abdel-Raheem and M. Tarique, “A study of using cough sounds and deep neural networks for the early detection of Covid-19,” Biomedical Engineering Advances, vol. 3, 2022.
- A. Hassan, I. Shahin and M. B. Alsabek, “COVID-19 Detection System using Recurrent Neural Networks,” International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI), pp. 1-5, 2020.
- A. B S, S. R. Shetty, S. Srinivas, V. Mantri and V. R. B. Prasad, “Intoxication Detection using Audio,” 2023 IEEE 8th International Conference for Convergence in Technology (I2CT), pp. 1-6, 2023.
- H. Purohit, R. Tanabe, K. Ichige, T. Endo, Y. Nikaido, K. Suefusa and Y. Kawaguchi, “MIMII Dataset: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection,” in Proc. 4th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 2019.
- R. Cretulescu and D. Morariu, Tehnici de clasificare si clustering al documentelor [Techniques for document classification and clustering], Cluj-Napoca: Editura Albastra, 2012.
- F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot and É. Duchesnay, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011.