EmoV-DB

NEU Emotional Speech Corpus

  • This dataset is built for the purpose of emotional speech synthesis. The transcripts are based on the CMU Arctic database: http://www.festvox.org/cmu_arctic/cmuarctic.data.

  • It includes recordings from four speakers: two male and two female.

  • The emotional styles are neutral, sleepiness, anger, disgust, and amusement.

  • Each audio file is recorded in 16-bit .wav format.

  • Spk-Je (Female, English: Neutral(417 files), Amused(222 files), Angry(523 files), Sleepy(466 files), Disgust(189 files))

  • Spk-Bea (Female, English: Neutral(373 files), Amused(309 files), Angry(317 files), Sleepy(520 files), Disgust(347 files))

  • Spk-Sa (Male, English: Neutral(493 files), Amused(501 files), Angry(468 files), Sleepy(495 files), Disgust(497 files))

  • Spk-Jsh (Male, English: Neutral(302 files), Amused(298 files), Sleepy(263 files))

  • File naming (audio_folder): e.g. anger_1-28_0011.wav. The first word is the emotional style, 1-28 is the annotation document file range, and the last four digits are the sentence number.

  • File naming (annotation_folder): e.g. anger_1-28.TextGrid. The first word is the emotional style, and 1-28 is the annotation document file range.
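The naming convention above can be parsed programmatically. The following is a minimal sketch (the function name and regular expression are our own, not part of the dataset's tooling) that splits an audio filename into its emotional style, annotation document range, and sentence number:

```python
import re

# Assumed pattern per the convention above: <emotion>_<doc-range>_<sentence>.wav,
# e.g. "anger_1-28_0011.wav". This regex and helper are illustrative only.
AUDIO_NAME = re.compile(r"^(?P<emotion>[a-z]+)_(?P<doc_range>\d+-\d+)_(?P<sentence>\d{4})\.wav$")

def parse_audio_filename(name):
    """Return (emotion, annotation doc range, sentence number), or None if
    the filename does not follow the convention."""
    m = AUDIO_NAME.match(name)
    if m is None:
        return None
    return m.group("emotion"), m.group("doc_range"), int(m.group("sentence"))

print(parse_audio_filename("anger_1-28_0011.wav"))  # ('anger', '1-28', 11)
```

The same idea applies to the annotation folder: dropping the trailing sentence field and matching a .TextGrid extension instead of .wav covers names such as anger_1-28.TextGrid.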

Please reference the paper below when using this database:

A. Adigwe, N. Tits, K. El Haddad, S. Ostadabbas, and T. Dutoit, "The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems," 2018. arXiv preprint arXiv:1806.09514.

For further inquiries, please contact:

Sarah Ostadabbas, PhD
Electrical & Computer Engineering Department
Northeastern University, Boston, MA 02115
Office Phone: 617-373-4992
Email: ostadabbas@ece.neu.edu
Augmented Cognition Lab (ACLab) Webpage: http://www.northeastern.edu/ostadabbas/