| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:10:54.677687Z" |
| }, |
| "title": "Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition", |
| "authors": [ |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Keesing", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Auckland New Zealand", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Watson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Auckland New Zealand", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Witbrock", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Auckland New Zealand", |
| "location": {} |
| }, |
| "email": "m.witbrock@auckland.ac.nz" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Speech emotion recognition (SER) is the analysis of speech to predict the emotional state of the speaker, for which there are many current and potential applications (Peter and Beale, 2008; Koolagudi and Rao, 2012) . As speech-enabled devices become more prevalent, the need for reliable and robust SER increases, and also the need for comparability of results on common datasets. While there has been a large amount of research in this field, a lot of results come from testing only on one or two datasets, which may or may not be publicly available. Additionally, different methodologies are often used, reducing direct comparability of results. Given the wide variety of neural architectures and testing methodologies, there is need for a common testing framework to help comparisons.", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 189, |
| "text": "(Peter and Beale, 2008;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 190, |
| "end": 214, |
| "text": "Koolagudi and Rao, 2012)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This study aims to test some SER models proposed in the literature on a discrete emotion classification task, and promote reproducibility of results by using public and academic licensed datasets. In addition, the code is publicly hosted on GitHub 1 under an open source license, so that our results may be verified and built upon. Our work has two 1 https://github.com/Broad-AI-Lab/ emotion main benefits. First, it serves as a baseline reference for future research that uses datasets present in this study. Second, it allows for comparisons between datasets to see which of their properties may influence classification performance of different models.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 249, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper is structured as follows. In Section 2 related work is given, and in Section 3 we list the datasets used in this study. The tested methods are outlined in Section 4, and the results given in Section 5. We briefly discuss these results in Section 6 and a conclude in Section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There has been some previous work in comparing SER techniques on a number of datasets. In Schuller et al. (2009a) , Schuller et al. compare a hidden Markov model/Gaussian mixture model (HMM/GMM) and a SVM classifier for emotion class, arousal and valence prediction on nine datasets. For HMM/GMM, 12 MFCC, log-frameenergy, speed and acceleration features, are extracted per frame. For SVM, 6552 features are extracted based on 39 statistical functionals of 56 low-level descriptors (LLDs). Testing was done in a leave-one-speaker-out (LOSO) or leaveone-speaker-group-out (LOSGO) cross-validation setup. The only three datasets in common with the present study are EMO-DB, eNTERFACE and SmartKom, for which unweighted average recall (UAR) of 84.6%, 72.5%, and 23.5% were achieved, respectively. We use a similar methodology in the present paper.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 113, |
| "text": "Schuller et al. (2009a)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
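The utterance-level functional approach described above can be illustrated with a minimal numpy sketch. This is not the reference implementation: only five stand-in functionals are shown here rather than the 39 statistical functionals used in the cited work, but the shape of the computation (per-frame LLDs reduced to one fixed-length vector per utterance) is the same.

```python
import numpy as np

def functionals(lld):
    # reduce a (frames, n_lld) matrix of low-level descriptors to one
    # fixed-length utterance vector by applying statistics over time;
    # five illustrative functionals stand in for the 39 used in the paper
    stats = [np.mean, np.std, np.min, np.max, np.median]
    return np.concatenate([f(lld, axis=0) for f in stats])

# 500 frames of 56 hypothetical LLDs for one utterance
frames = np.random.default_rng(0).standard_normal((500, 56))
print(functionals(frames).shape)  # (280,) = 5 functionals x 56 LLDs
```

With 39 functionals applied to 56 LLDs plus their delta and delta-delta coefficients, the same reduction yields the 6552-dimensional vectors mentioned above.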
| { |
| "text": "The Schuller et al. work is expanded in Stuhlsatz et al. (2011) , where multi-layer stacks of restricted Boltzmann machines (RBMs) are pre-trained in an unsupervised manner, then fine-tuned using backpropagation as a feed-forward neural network. as in Schuller et al. (2009a) , but the all-class emotion classification results are better on only some of the datasets. In particular, GerDA performs slightly better on average for SmartKom, but slightly worse for EMO-DB and eNTERFACE. In the current work, we compare many more methods on many more datasets; we also include more recent datasets.", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 63, |
| "text": "Stuhlsatz et al. (2011)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 252, |
| "end": 275, |
| "text": "Schuller et al. (2009a)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Fifteen datasets are used in this study, some of which are open datasets, while others require a signed EULA to access. All of the datasets have a set of categorical emotional labels. A question arises when using acted datasets with additional annotations, such as CREMA-D, as to whether to use the actor's intended emotion as 'ground truth' for training a classifier or instead use a consensus of annotators with majority vote. For MSP-IMPROV and IEMOCAP, the label assigned by annotators is used, consistent with previous work. For CREMA-D we have opted to use the actors intended emotion, rather than any annotator assigned labels. A table describing the emotion distribution in each dataset is given in Table 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 707, |
| "end": 714, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3" |
| }, |
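The annotator-consensus labelling described above can be sketched in a few lines of Python. This is an illustrative majority-vote rule (the tie-handling policy here — returning no consensus on a tie — is an assumption, not necessarily what each corpus's release uses):

```python
from collections import Counter

def majority_label(annotations):
    # consensus emotion label by majority vote over annotators;
    # a tie between the top labels yields no consensus (None)
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

print(majority_label(["hap", "hap", "neu"]))  # hap
print(majority_label(["hap", "neu"]))         # None (tie)
```

Utterances with no consensus are typically excluded from training, which is one reason annotator-labelled and actor-labelled versions of the same corpus can differ in size.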
| { |
| "text": "Open datasets are those under a free and permissive license, and are able to be downloaded with request- ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Open Datasets", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Licensed datasets are those that require signing an academic or other license in order to gain access to the data. The licensed datasets used in this study are: Database of Elicited Mood in Speech (DEMoS) (Parada-Cabaleiro et al., 2019), EmoFilm (Parada-Cabaleiro et al., 2018), Interactive Emotional Dyadic Motion Capture (IEMOCAP) (Busso et al., 2008) , MSP-IMPROV (Busso et al., 2017) , Surrey Audio-Visual Expressed Emotion (SAVEE) database (Haq et al., 2008) , and the SmartKom corpus, public set (Schiel et al., 2002) . We implement the final model from Aldeneh and Mower Provost (2017) . This consists of four independent 1D convolutions, followed by maxpooling. The resulting vectors are concatenated into a feature vector which is passed to two fully-connected layers. The Aldeneh model takes a 2D sequence of log Mel-frequency spectrograms as input.", |
| "cite_spans": [ |
| { |
| "start": 333, |
| "end": 353, |
| "text": "(Busso et al., 2008)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 367, |
| "end": 387, |
| "text": "(Busso et al., 2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 445, |
| "end": 463, |
| "text": "(Haq et al., 2008)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 502, |
| "end": 523, |
| "text": "(Schiel et al., 2002)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 560, |
| "end": 592, |
| "text": "Aldeneh and Mower Provost (2017)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Licensed Datasets", |
| "sec_num": "3.2" |
| }, |
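The parallel-convolution structure of the Aldeneh model can be sketched in plain numpy. This is only an illustration of the idea (several 1D convolutions of different widths over time, each globally max-pooled, then concatenated); the kernel widths and filter counts below are placeholders, not the values from the original paper, and a real implementation would use trained Keras layers rather than random kernels.

```python
import numpy as np

def relu_conv1d(x, kernels):
    # valid 1D convolution over time with ReLU;
    # x: (frames, bands), kernels: (width, bands, n_filters)
    w = kernels.shape[0]
    t = x.shape[0] - w + 1
    out = np.empty((t, kernels.shape[2]))
    for i in range(t):
        out[i] = np.tensordot(x[i:i + w], kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
spec = rng.standard_normal((300, 40))   # 300 frames of a 40-band log-mel spectrogram
widths = [4, 8, 16, 32]                 # illustrative kernel widths, one per branch
pooled = [relu_conv1d(spec, rng.standard_normal((w, 40, 16)) * 0.01).max(axis=0)
          for w in widths]              # global max-pool over time per branch
features = np.concatenate(pooled)       # input to the fully-connected layers
print(features.shape)  # (64,) = 4 branches x 16 filters
```

Because each branch is pooled over the whole time axis, the concatenated vector has a fixed size regardless of utterance length, which is what lets this architecture handle variable-length spectrograms.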
| { |
| "text": "The model from Latif et al. (2019) consists of 3 independent 1D convolutions of with batch normalisation and maxpooling. The filters are concatenated feature-wise and a 2D convolution is performed, again with batch normalisation and maxpooling. The final 1920-dimensional feature sequence is passed through a LSTM block, followed by 30% dropout and a fully-connected layer. The Latif model takes 1D raw audio as input.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 34, |
| "text": "Latif et al. (2019)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Licensed Datasets", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The model from Zhao et al. (2019) consists of a convolutional branch and a recurrent branch that act on 2D spectrograms. The recurrent branch consists of a bidirectional LSTM with a single layer, whereas in the paper they used two layers. The convolutional branch consists of three sequential 2D convolutions, with batch normalisation, max-pooling and dropout. The filters and kernel sizes are different across convolutions and the resulting time-frequency axes are flattened and passed through a dense layer. The convolutional and recurrent branches are individually pooled using weighted attention pooling, concatenated and finally passed through a dense layer.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 33, |
| "text": "Zhao et al. (2019)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Licensed Datasets", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The model proposed in acts on a raw audio waveform. The audio is framed with a frame size of 640 samples and shift of 160 samples. Two 1D convolutions with maxpooling are calculated along the time dimension. The features are then pooled in the feature dimension and flattened to a 1280-dimensional vector per frame. The sequences are fed into a 2-layer GRU, before weighted attention pooling, as in the Zhao model. Although this model was originally designed to perform multi-task discrete valence and arousal classification, we apply it to the single-task emotion label classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Licensed Datasets", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We perform leave-one-speaker-out (LOSO) or leave-one-speaker-group-out (LOSGO) crossvalidation for all tests. Before testing, we perform per-speaker standardisation of feature columns, as in (Schuller et al., 2009a) . If a dataset has more than 12 speakers, then 6 random speaker groups are chosen for cross-validation. For IEMOCAP and MSP-IMPROV, each session defines a speaker group. All models are trained for 50 epochs with the Adam optimiser (Kingma and Ba, 2017) and a learning rate of 0.0001. The batch size used for the Aldeneh and Latif models was 32, for the Zhao model was 64, and for the Zhang model was 16. Each was trained using sample weights inversely proportional to the respective class sizes, so the each class had equal total weight. The sample weights were used to scale the cross-entropy loss. The metric reported is 'unweighted average recall' (UAR), which is simply the mean of the per-class recall scores. This incorporates all classes equally even if there is a large class bias, and minimises the effect of class distribution on the reported accuracy, so that models can't simply optimise for the majority class. Each test is repeated 3 times and averaged, except for the Zhang model, which was only tested once, because it took too long to train.", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 215, |
| "text": "(Schuller et al., 2009a)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-validation", |
| "sec_num": "4.2" |
| }, |
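The UAR metric and the class-balancing sample weights described above are simple to compute; a minimal pure-Python sketch (label names are hypothetical):

```python
from collections import Counter

def uar(y_true, y_pred):
    # unweighted average recall: mean of per-class recalls,
    # so every class counts equally regardless of its size
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def sample_weights(y):
    # weights inversely proportional to class size,
    # so each class contributes equal total weight to the loss
    counts = Counter(y)
    return [1.0 / (len(counts) * counts[c]) for c in y]

y_true = ["ang", "ang", "ang", "hap", "neu", "neu"]
y_pred = ["ang", "ang", "hap", "hap", "neu", "hap"]
print(uar(y_true, y_pred))  # (2/3 + 1/1 + 1/2) / 3 ≈ 0.722
```

Note that a majority-class predictor scores only 1/n_classes under UAR, which is the chance level referred to in the discussion below.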
| { |
| "text": "All models were implemented in Python using the TensorFlow 2 Keras API. Testing was run on a machine with 64GB of RAM, an AMD Ryzen 3900X CPU, and two NVIDIA GeForce RTX 2080 Super GPUs, each with 8GB of VRAM. Each training run used only one GPU, however. For the and Latif et al. (2019) models we use the raw time domain signals. These are clipped to a maximum length of 80,000 samples (5 seconds at 16,000 kHz), but not padded, unlike the fixed spectrograms. For the Zhao et al. (2019) model we input a 5 second log-mel spectrogram with 40 mel bands calculated using a frame size of 25ms and frame shift of 10ms. Audio is clipped/padded to exactly 5 seconds. For the Aldeneh and Mower Provost (2017) model we test three different inputs: a 5 second 240 mel band spectrogram, 240 log-mel bands without clipping/padding, and 40 log-mel bands without clipping/padding. The log-mel bands are variable length sequences and are length-padded to the nearest larger multiple of 64, before batching. This way the models train with different sequence lengths.", |
| "cite_spans": [ |
| { |
| "start": 268, |
| "end": 287, |
| "text": "Latif et al. (2019)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 469, |
| "end": 487, |
| "text": "Zhao et al. (2019)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 669, |
| "end": 701, |
| "text": "Aldeneh and Mower Provost (2017)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-validation", |
| "sec_num": "4.2" |
| }, |
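The batching step above — length-padding each batch of variable-length spectrograms to the nearest larger multiple of 64 frames — can be sketched as follows (a minimal numpy version; the real pipeline operates on Keras batches):

```python
import numpy as np

def pad_to_multiple(seqs, multiple=64):
    # zero-pad a batch of (frames, bands) sequences so the batch length
    # is the nearest multiple of `multiple` above the longest sequence
    longest = max(s.shape[0] for s in seqs)
    target = ((longest + multiple - 1) // multiple) * multiple  # ceil to multiple
    batch = np.zeros((len(seqs), target, seqs[0].shape[1]))
    for i, s in enumerate(seqs):
        batch[i, :s.shape[0]] = s
    return batch

seqs = [np.ones((130, 40)), np.ones((200, 40))]  # two utterances, 40 log-mel bands
print(pad_to_multiple(seqs).shape)  # (2, 256, 40): 200 frames rounds up to 256
```

Padding per batch rather than to a global maximum is what exposes the model to different sequence lengths during training, as noted above.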
| { |
| "text": "A table of results is given in Table 3 below. All combinations of dataset and model+features were tested. For comparison, we also report on the performance of the 'IS09' standard feature set introduced in the first INTERSPEECH emotion competition (Schuller et al., 2009b) . For this we use a support vector machine (SVM) with radial basis function (RBF) kernel, with SVM parameter C and kernel parameter \u03b3 optimised using LOS(G)O cross-validation. We also report human accuracy where it has either been mentioned in the corresponding citation, or can be calculated from multiple label annotations provided with the dataset.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 271, |
| "text": "(Schuller et al., 2009b)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 38, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
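The SVM baseline with speaker-grouped hyperparameter search can be sketched with scikit-learn. The data below is a synthetic stand-in (the real setup uses utterance-level IS09 feature vectors with per-speaker standardisation), and the C/γ grids are illustrative; note that scikit-learn's 'balanced_accuracy' scorer is exactly the UAR metric used in this paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))     # stand-in for utterance-level feature vectors
y = np.tile(np.arange(4), 15)         # four emotion classes
groups = np.repeat(np.arange(6), 10)  # six speaker groups for LOSGO folds

# optimise C and gamma with leave-one-speaker-group-out cross-validation
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1.0, 10.0], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=LeaveOneGroupOut(),
    scoring="balanced_accuracy",  # balanced accuracy == UAR
)
grid.fit(X, y, groups=groups)
print(sorted(grid.best_params_))  # ['C', 'gamma']
```

Keeping whole speakers out of each validation fold is what makes the tuned parameters honest for the speaker-independent evaluation reported here.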
| { |
| "text": "From the results we see that the models using raw audio as input tend to perform worse than those using spectrogram input. There are also cases, such as on the Portuguese dataset, where the Zhang model performs the best of the four, and such as on the JL corpus, where the raw audio models are better than the fixed-size spectrogram models but worse than the variable length log-mel models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "There are many possible reasons for this, and due to time constraints, more thorough investigation was not able to be done. One reason is likely the lack of hyperparameter tuning. Hyperparameters like number of training epochs, learning rate, batch size, and model specific hyperparameters such as the number of convolution kernels or number of LSTM units, can have a moderate effect on the performance of each model. These would need to be optimised per-dataset using cross-validation, before testing. Another possible reason is the tendency for models to overfit. We found that the raw audio models were overfitting quite badly and achieving worse performance on the test set as a result, even though they have a moderate number of parameters. Regularisation techniques can help with this, such as dropout and regularisation loss, along with batch normalisation. Finally, while we tried to make our models as similar as possible to the original papers, there are likely implementation differences that negatively influence the performance of our models. The design of the Zhang model was for discrete arousal/valence prediction, and it is likely that a slightly modified architecture would better suit categorical emotion prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The other models were also tested with slightly different methodologies from ours, which would influence difference in reported results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We also see a dependence on both dataset and features used. The Aldeneh model with 240 logmels tended to be better than with only 40 logmels, but also better than a fixed size 240 mel-band spectrogram, but this was dependent on dataset. It's possible that the zero-padding and -60dB clipping of the spectrograms negatively impacted the performance. The Zhao model performs best out of the four on the SmartKom dataset, achieving a UAR better than chance level, but still worse than the SVM with IS09 features. It's possible that in this instance the separate LSTM and convolutional branches have a greater effect. Unfortunately we were not able to test all combinations of spectrogram features with the Zhao model. In future we aim to complete this, as well as compare using spectrograms with different frame size and clipping parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, the time taken to train these models is quite long due to using full cross-validation. An argument can be made for predefined training/validation/test sets of larger datasets, but these are often created ad hoc and can vary between studies, so collective agreement would be needed for using these as a common standard. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this paper we have presented an evaluation of different neural network models proposed for emotion recognition, and compared their performance for discrete emotion classification on 15 publicly available and academic datasets. We used a consistent methodology across all datasets, and have kept hyperparameters very similar across the proposed models. The results show differences in the performance of the models which sometimes depends on the evaluated dataset. We also showed that the models requiring raw audio input tended to perform worse than the ones requiring spectrogram input, however more testing is required, with hyperparameter tuning and regularisation techniques, to determine the cause of this performance difference. In general, our work serves as a baseline for comparison for future research. In future, we aim to additionally test models using utterance level features as input, and compare with non-neural network models such as SVM and random forests. We also aim to test feature generation methods such as bag-of-audio-words and unsupervised representation learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "https://www.tensorflow.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank the University of Auckland for funding this research through a PhD scholarship. We would like to thank in particular the School of Computer Science for providing the computer hardware to train and test these models. We would also like to thank the anonymous reviewers who submitted helpful feedback on this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Using regional saliency for speech emotion recognition", |
| "authors": [ |
| { |
| "first": "Zakaria", |
| "middle": [], |
| "last": "Aldeneh", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [ |
| "Mower" |
| ], |
| "last": "Provost", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "2379--190", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ICASSP.2017.7952655" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zakaria Aldeneh and Emily Mower Provost. 2017. Us- ing regional saliency for speech emotion recognition. In 2017 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 2741-2745. ISSN: 2379-190X.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A database of German emotional speech", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Burkhardt", |
| "suffix": "" |
| }, |
| { |
| "first": "Astrid", |
| "middle": [], |
| "last": "Paeschke", |
| "suffix": "" |
| }, |
| { |
| "first": "Miriam", |
| "middle": [], |
| "last": "Rolfes", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [ |
| "F" |
| ], |
| "last": "Sendlmeier", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Interspeech 2005 -Eurospeech", |
| "volume": "", |
| "issue": "", |
| "pages": "1517--1520", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Burkhardt, Astrid Paeschke, Miriam Rolfes, Wal- ter F. Sendlmeier, and Benjamin Weiss. 2005. A database of German emotional speech. In Inter- speech 2005 -Eurospeech, pages 1517-1520, Lis- bon, Portugal.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Busso", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Parthasarathy", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Burmania", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Abdel-Wahab", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Sadoughi", |
| "suffix": "" |
| }, |
| { |
| "first": "E. Mower", |
| "middle": [], |
| "last": "Provost", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IEEE Transactions on Affective Computing", |
| "volume": "8", |
| "issue": "1", |
| "pages": "67--80", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/TAFFC.2016.2515617" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Busso, S. Parthasarathy, A. Burmania, M. Abdel- Wahab, N. Sadoughi, and E. Mower Provost. 2017. MSP-IMPROV: An Acted Corpus of Dyadic Inter- actions to Study Emotion Perception. IEEE Trans- actions on Affective Computing, 8(1):67-80.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "IEMOCAP: interactive emotional dyadic motion capture database. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Busso", |
| "suffix": "" |
| }, |
| { |
| "first": "Murtaza", |
| "middle": [], |
| "last": "Bulut", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi-Chun", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Abe", |
| "middle": [], |
| "last": "Kazemzadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Mower", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeannette", |
| "middle": [ |
| "N" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungbok", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Shrikanth", |
| "middle": [ |
| "S" |
| ], |
| "last": "Narayanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "42", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-008-9076-6" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jean- nette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. IEMOCAP: interactive emotional dyadic motion capture database. Language Re- sources and Evaluation, 42(4):335.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "G" |
| ], |
| "last": "Cooper", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "K" |
| ], |
| "last": "Keutmann", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "C" |
| ], |
| "last": "Gur", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Verma", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "IEEE Transactions on Affective Computing", |
| "volume": "5", |
| "issue": "4", |
| "pages": "377--390", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/TAFFC.2014.2336244" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Cao, D. G. Cooper, M. K. Keutmann, R. C. Gur, A. Nenkova, and R. Verma. 2014. CREMA- D: Crowd-Sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5(4):377-390.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Recognition of emotional speech for younger and older talkers: Behavioural findings from the toronto emotional speech set", |
| "authors": [ |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Dupuis", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [], |
| "last": "Pichora-Fuller", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Canadian Acoustics", |
| "volume": "39", |
| "issue": "3", |
| "pages": "182--183", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kate Dupuis and M. Kathleen Pichora-Fuller. 2011. Recognition of emotional speech for younger and older talkers: Behavioural findings from the toronto emotional speech set. Canadian Acoustics, 39(3):182-183.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A canadian french emotional speech dataset", |
| "authors": [ |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Gournay", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Lahaie", |
| "suffix": "" |
| }, |
| { |
| "first": "Roch", |
| "middle": [], |
| "last": "Lefebvre", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 9th ACM Multimedia Systems Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "399--402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philippe Gournay, Olivier Lahaie, and Roch Lefebvre. 2018. A canadian french emotional speech dataset. In Proceedings of the 9th ACM Multimedia Systems Conference, pages 399-402. ACM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Audio-visual feature selection and reduction for emotion classification", |
| "authors": [ |
| { |
| "first": "Sanaul", |
| "middle": [], |
| "last": "Haq", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "B" |
| ], |
| "last": "Philip", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Jackson", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Edge", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "International Conference on Auditory-Visual Speech Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "185--190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanaul Haq, Philip JB Jackson, and James Edge. 2008. Audio-visual feature selection and reduction for emotion classification. In International Conference on Auditory-Visual Speech Processing 2008, pages 185-190, Tangalooma Wild Dolphin Resort, More- ton Island, Queensland, Australia.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An Open Source Emotional Speech Corpus for Human Robot Interaction Applications", |
| "authors": [ |
| { |
| "first": "Jesin", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [ |
| "Inez" |
| ], |
| "last": "Watson", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "2768--2772", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jesin James, Li Tian, and Catherine Inez Watson. 2018. An Open Source Emotional Speech Corpus for Hu- man Robot Interaction Applications. In Interspeech, pages 2768-2772.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Adam: A Method for Stochastic Optimization", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Diederik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980[cs].ArXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs]. ArXiv: 1412.6980.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Emotion recognition from speech: a review", |
| "authors": [ |
| { |
| "first": "Shashidhar", |
| "middle": [ |
| "G" |
| ], |
| "last": "Koolagudi", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "Sreenivasa" |
| ], |
| "last": "Rao", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "International Journal of Speech Technology", |
| "volume": "15", |
| "issue": "2", |
| "pages": "99--117", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10772-011-9125-1" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shashidhar G. Koolagudi and K. Sreenivasa Rao. 2012. Emotion recognition from speech: a review. Inter- national Journal of Speech Technology, 15(2):99- 117.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Direct Modelling of Speech Emotion from Raw Speech", |
| "authors": [ |
| { |
| "first": "Siddique", |
| "middle": [], |
| "last": "Latif", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajib", |
| "middle": [], |
| "last": "Rana", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Khalifa", |
| "suffix": "" |
| }, |
| { |
| "first": "Raja", |
| "middle": [], |
| "last": "Jurdak", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Epps", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1904.03833" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, and Julien Epps. 2019. Direct Modelling of Speech Emotion from Raw Speech. arXiv:1904.03833 [cs, eess]. ArXiv: 1904.03833.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [ |
| "R" |
| ], |
| "last": "Livingstone", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [ |
| "A" |
| ], |
| "last": "Russo", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "PLOS ONE", |
| "volume": "13", |
| "issue": "5", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1371/journal.pone.0196391" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven R. Livingstone and Frank A. Russo. 2018. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multi- modal set of facial and vocal expressions in North American English. PLOS ONE, 13(5):e0196391.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The eNTERFACE' 05 Audio-Visual Emotion Database", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Kotsia", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Macq", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Pitas", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "22nd International Conference on Data Engineering Workshops", |
| "volume": "", |
| "issue": "", |
| "pages": "8--8", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ICDEW.2006.145" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Martin, I. Kotsia, B. Macq, and I. Pitas. 2006. The eNTERFACE' 05 Audio-Visual Emotion Database. In 22nd International Conference on Data Engineer- ing Workshops, pages 8-8.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "ShEMO: a large-scale validated database for Persian speech emotion detection. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Omid", |
| "middle": [ |
| "Mohamad" |
| ], |
| "last": "Nezami", |
| "suffix": "" |
| }, |
| { |
| "first": "Paria", |
| "middle": [ |
| "Jamshid" |
| ], |
| "last": "Lou", |
| "suffix": "" |
| }, |
| { |
| "first": "Mansoureh", |
| "middle": [], |
| "last": "Karami", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "53", |
| "issue": "", |
| "pages": "1--16", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-018-9427-x" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omid Mohamad Nezami, Paria Jamshid Lou, and Man- soureh Karami. 2019. ShEMO: a large-scale vali- dated database for Persian speech emotion detection. Language Resources and Evaluation, 53(1):1-16.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Categorical vs Dimensional Perception of Italian Emotional Speech", |
| "authors": [ |
| { |
| "first": "Emilia", |
| "middle": [], |
| "last": "Parada-Cabaleiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Costantini", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Batliner", |
| "suffix": "" |
| }, |
| { |
| "first": "Alice", |
| "middle": [], |
| "last": "Baird", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Schuller", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "3638--3642", |
| "other_ids": { |
| "DOI": [ |
| "10.21437/Interspeech.2018-47" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emilia Parada-Cabaleiro, Giovanni Costantini, Anton Batliner, Alice Baird, and Bj\u00f6rn Schuller. 2018. Cat- egorical vs Dimensional Perception of Italian Emo- tional Speech. In Interspeech 2018, pages 3638- 3642. ISCA.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "DEMoS: an Italian emotional speech corpus. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Emilia", |
| "middle": [], |
| "last": "Parada-Cabaleiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Costantini", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Batliner", |
| "suffix": "" |
| }, |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Schmitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [ |
| "W" |
| ], |
| "last": "Schuller", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-019-09450-y" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emilia Parada-Cabaleiro, Giovanni Costantini, Anton Batliner, Maximilian Schmitt, and Bj\u00f6rn W. Schuller. 2019. DEMoS: an Italian emotional speech corpus. Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Affect and Emotion in Human-Computer Interaction: From Theory to Applications", |
| "authors": [ |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "Russell", |
| "middle": [], |
| "last": "Beale", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Lecture Notes in Computer Science. Springer Science & Business Media. Google-Books", |
| "volume": "4868", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christian Peter and Russell Beale, editors. 2008. Af- fect and Emotion in Human-Computer Interaction: From Theory to Applications. Number 4868 in Lecture Notes in Computer Science. Springer Sci- ence & Business Media. Google-Books-ID: Bwd- cWtO666EC.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The SmartKom Multimodal Corpus at BAS", |
| "authors": [ |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Schiel", |
| "suffix": "" |
| }, |
| { |
| "first": "Silke", |
| "middle": [], |
| "last": "Steininger", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulrich", |
| "middle": [], |
| "last": "T\u00fcrk", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Third International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "200--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Florian Schiel, Silke Steininger, and Ulrich T\u00fcrk. 2002. The SmartKom Multimodal Corpus at BAS. In Third International Conference on Language Re- sources and Evaluation, pages 200-206, Las Palmas, Canary Islands, Spain. Citeseer.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Acoustic emotion recognition: A benchmark comparison of performances", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schuller", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Vlasenko", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Eyben", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Rigoll", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Wendemuth", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "IEEE Workshop on Automatic Speech Recognition Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "552--557", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ASRU.2009.5372886" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Schuller, B. Vlasenko, F. Eyben, G. Rigoll, and A. Wendemuth. 2009a. Acoustic emotion recogni- tion: A benchmark comparison of performances. In 2009 IEEE Workshop on Automatic Speech Recogni- tion Understanding, pages 552-557.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The INTERSPEECH 2009 emotion challenge", |
| "authors": [ |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Schuller", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Steidl", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Batliner", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "10th Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "312--315", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bj\u00f6rn Schuller, Stefan Steidl, and Anton Batliner. 2009b. The INTERSPEECH 2009 emotion chal- lenge. In 10th Annual Conference of the Interna- tional Speech Communication Association, pages 312-315, Brighton, United Kingdom.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Deep neural networks for acoustic emotion recognition: Raising the benchmarks", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Stuhlsatz", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Meyer", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Eyben", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Zielke", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Meier", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schuller", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "5688--5691", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ICASSP.2011.5947651" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Stuhlsatz, C. Meyer, F. Eyben, T. Zielke, G. Meier, and B. Schuller. 2011. Deep neural networks for acoustic emotion recognition: Raising the bench- marks. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5688-5691.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Attentionaugmented End-to-end Multi-task Learning for Emotion Prediction from Speech", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schuller", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ICASSP 2019 -2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "6705--6709", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ICASSP.2019.8682896" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Zhang, B. Wu, and B. Schuller. 2019. Attention- augmented End-to-end Multi-task Learning for Emotion Prediction from Speech. In ICASSP 2019 -2019 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 6705-6709.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Exploring Deep Spectrum Representations via Attention-Based Recurrent and Convolutional Neural Networks for Speech Emotion Recognition", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Bao", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Cummins", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Schuller", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "IEEE Access", |
| "volume": "7", |
| "issue": "", |
| "pages": "97515--97525", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ACCESS.2019.2928625" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Zhao, Z. Bao, Y. Zhao, Z. Zhang, N. Cummins, Z. Ren, and B. Schuller. 2019. Exploring Deep Spec- trum Representations via Attention-Based Recur- rent and Convolutional Neural Networks for Speech Emotion Recognition. IEEE Access, 7:97515- 97525.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "ing permission or signing an academic license. The open datasets used in this study are: Canadian-French emotional dataset (Gournay et al., 2018), Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D) (Cao et al., 2014), EMO-DB (Burkhardt et al., 2005), eNTERFACE dataset (Martin et al., 2006), JL corpus (James et al., 2018), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) (Livingstone and Russo, 2018), Sharif Emotional Speech Database (ShEMO) (Mohamad Nezami et al., 2019), and the Toronto Emotional Speech Set (TESS) (Dupuis and Pichora-Fuller, 2011).", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "Dataset emotion distribution. The number of clips in each of the 'big six' emotions along with neutral and other, is given, as well as the total number of clips in each dataset. oth. = other (dataset specific); unk. = unknown", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF3": { |
| "text": "Summary table of model parameters. CNN: convolutional neural network. RNN: recurrent neural network. Att.: attention pooling. The number of parameters for the Aldeneh model depends on the number of frequency bands in the input.", |
| "content": "<table><tr><td>models were selected with the goal of having a</td></tr><tr><td>variety of model types (convolutional and recur-</td></tr><tr><td>rent), variety of input formats (spectrogram and</td></tr><tr><td>raw audio), and recency (within the past few years).</td></tr><tr><td>After each model is introduced with citation, it will</td></tr><tr><td>subsequently be referred to by the primary author's</td></tr><tr><td>surname. A summary table of model types and</td></tr><tr><td>number of parameters is given in Table 2. Each</td></tr><tr><td>model outputs are probability distribution over N</td></tr><tr><td>classes.</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF5": { |
| "text": "Table ofresults. All values are given in UAR. A1: Aldeneh model with variable 40 log-mels. A2: Aldeneh model with variable 240 log-mels. A3: Aldeneh model with fixed 5s 240-mel spectrogram. L: Latif model with 5s raw audio. N: Zhang model with 5s raw audio. O: Zhao model with fixed 5s 40-mel spectrogram. Human accuracy is the average accuracy of a human rater, either tested in the relevant citation, or calculated directly from annotations (e.g. CREMA-D).", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |