
Comparison of speaker dependent and speaker independent emotion recognition

By: Jan Rybka and Artur Janicki
Open Access | Dec 2013

Abstract

This paper describes a study of emotion recognition based on speech analysis. The introduction to the theory contains a review of emotion inventories used in various studies of emotion recognition, as well as the speech corpora applied, methods of speech parametrization, and the most commonly employed classification algorithms. In the current study, the EMO-DB speech corpus and three selected classifiers, k-Nearest Neighbors (k-NN), an Artificial Neural Network (ANN), and Support Vector Machines (SVMs), were used in the experiments. SVMs turned out to provide the best classification accuracy, 75.44%, in the speaker-dependent mode, that is, when speech samples from the same speaker were included in the training corpus. Various speaker-dependent and speaker-independent configurations were analyzed and compared. Emotion recognition in speaker-dependent conditions usually yielded higher accuracy than a similar but speaker-independent configuration. The improvement was especially noticeable when the base recognition ratio of a given speaker was low. Happiness and anger, as well as boredom and neutrality, proved to be the pairs of emotions most often confused.
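To make the speaker-dependent vs. speaker-independent distinction concrete, the sketch below uses a toy k-NN classifier (one of the three classifiers named in the abstract) on invented feature vectors. All speaker names, feature values, and labels are hypothetical; real systems would use speech parametrizations such as MFCCs and a corpus like EMO-DB.

```python
import math

# Hypothetical toy corpus: (speaker, feature vector, emotion label).
# The 2-D features stand in for real speech parameters; values are invented.
corpus = [
    ("spk1", (0.9, 0.1), "anger"),   ("spk1", (0.8, 0.2), "anger"),
    ("spk1", (0.1, 0.9), "boredom"), ("spk1", (0.2, 0.8), "boredom"),
    ("spk2", (1.0, 0.3), "anger"),   ("spk2", (0.9, 0.4), "anger"),
    ("spk2", (0.3, 1.0), "boredom"), ("spk2", (0.2, 0.9), "boredom"),
]

def knn_predict(train, x, k=1):
    """Classify x by majority vote among its k nearest training samples."""
    nearest = sorted(train, key=lambda s: math.dist(s[1], x))[:k]
    votes = [label for _, _, label in nearest]
    return max(set(votes), key=votes.count)

def accuracy(train, test, k=1):
    """Fraction of test samples whose predicted emotion matches the label."""
    hits = sum(knn_predict(train, x, k) == y for _, x, y in test)
    return hits / len(test)

# Test set: held-out utterances of spk2.
test = [("spk2", (0.9, 0.4), "anger"), ("spk2", (0.2, 0.9), "boredom")]

# Speaker-independent: training contains no samples from the test speaker.
train_si = [s for s in corpus if s[0] == "spk1"]

# Speaker-dependent: other utterances of the test speaker join the training set.
train_sd = train_si + [("spk2", (1.0, 0.3), "anger"),
                       ("spk2", (0.3, 1.0), "boredom")]

print("speaker-independent accuracy:", accuracy(train_si, test))
print("speaker-dependent accuracy:  ", accuracy(train_sd, test))
```

On real data the speaker-dependent split typically scores higher, as the paper reports, because the classifier has already seen the test speaker's voice characteristics; this toy data is too easy to show the gap.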

DOI: https://doi.org/10.2478/amcs-2013-0060 | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 797 - 808
Published on: Dec 31, 2013
Published by: University of Zielona Góra
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2013 Jan Rybka, Artur Janicki, published by University of Zielona Góra
This work is licensed under the Creative Commons License.

Volume 23 (2013): Issue 4 (December 2013)