Journal of Computer Science

VERBO: Voice Emotion Recognition dataBase in Portuguese Language

José R. Torres Neto, Geraldo P.R. Filho, Leandro Y. Mano and Jó Ueyama

DOI : 10.3844/jcssp.2018.1420.1430

Volume 14, Issue 11

Pages 1420-1430

Abstract

Research in Affective Computing enables computational systems to recognize human emotional traits and to interpret and react intelligently to the user's context. Speech Emotion Recognition systems transform speech signal data into information about an individual's feelings in specific situations. However, how a human being expresses emotion depends largely on his or her origins; for this reason, emotional voice databases are particular to each language. In this paper, we propose a new emotional database with speech in Brazilian Portuguese, called the Voice Emotion Recognition dataBase in Portuguese language (VERBO). The database was validated by a panel of expert judges, reaching an agreement rate of 76% by the Content Validity Index (CVI) and a substantial agreement of 65% by Fleiss' Kappa. In addition, a classification accuracy of 0.76 was achieved: anger and happiness were the easiest emotions to recognize, with F1-scores of 0.85 and 0.83, respectively, whereas disgust and surprise were the most difficult, with F1-scores of 0.67 and 0.68. In view of this, the main contributions of this study are: (1) the establishment of a new acted voice database; (2) support for voice recognition systems in the analysis of feelings and emotions; and (3) statistical validation of the database using the CVI and Fleiss' Kappa.
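The abstract reports inter-judge agreement via Fleiss' Kappa. As a minimal sketch of how that statistic is computed (the standard formula, not the authors' own code; the `ratings` matrix below is a made-up toy example, not VERBO data):

```python
def fleiss_kappa(counts):
    """Fleiss' Kappa for a table of items x categories rating counts.

    counts[i][j] = number of raters who assigned item i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)                 # number of rated items
    n = sum(counts[0])              # raters per item
    # Observed per-item agreement P_i, averaged into P_bar
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Chance agreement P_e from overall category proportions p_j
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 3 judges labelling 4 clips into 2 emotion categories
ratings = [[3, 0], [3, 0], [0, 3], [1, 2]]
print(round(fleiss_kappa(ratings), 3))  # 0.657 -- "substantial" on the Landis-Koch scale
```

A Kappa around 0.65, as reported for VERBO, falls in the "substantial agreement" band (0.61-0.80) of the commonly used Landis-Koch interpretation.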

Copyright

© 2018 José R. Torres Neto, Geraldo P.R. Filho, Leandro Y. Mano and Jó Ueyama. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.