Update README.md

README.md
### Citation Information
Aljuhani, R. H., Alshutayri, A., & Alahdal, S. (2021). Arabic speech emotion recognition from Saudi dialect corpus. IEEE Access, 9, 127081-127085.

Baevski, A., Zhou, H., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. In NeurIPS.

Basu, S., Chakraborty, J., & Aftabuddin, M. (2017). Emotion recognition from speech using convolutional neural network with recurrent neural network architecture. In ICCES.

Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., ... & Narayanan, S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335-359.

Cao, H., Cooper, D. G., Keutmann, M. K., Gur, R. C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5(4), 377-390.

Chopra, S., Mathur, P., Sawhney, R., & Shah, R. R. (2021). Meta-learning for low-resource speech emotion recognition. In ICASSP.

Costantini, G., Iaderola, I., Paoloni, A., & Todisco, M. (2014). EMOVO Corpus: An Italian emotional speech database. In LREC (pp. 3501-3504). http://www.lrec-conf.org/proceedings/lrec2014/pdf/591_Paper.pdf

Duville, M. M., Alonso-Valerdi, L. M., & Ibarra-Zarate, D. I. (2022). Mexican Emotional Speech Database (MESD). Mendeley Data, V5. https://doi.org/10.17632/cy34mh68j9.5

Gournay, P., Lahaie, O., & Lefebvre, R. (2018). A Canadian French Emotional Speech Dataset (1.1) [Data set]. ACM Multimedia Systems Conference (MMSys'18), Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.1478765

Kandali, A., Routray, A., & Basu, T. (2008). Emotion recognition from Assamese speeches using MFCC features and GMM classifier. In TENCON.

Kondratenko, V., Sokolov, A., Karpov, N., Kutuzov, O., Savushkin, N., & Minkin, F. (2022). Large raw emotional dataset with aggregation mechanism. arXiv preprint arXiv:2212.12266.

Kwon, S. (2021). MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Systems with Applications, 167, 114177.

Lee, Y., Lee, J. W., & Kim, S. (2019). Emotion recognition using convolutional neural network and multiple feature fusion. In ICASSP.

Li, Y., Baidoo, C., Cai, T., & Kusi, G. A. (2019). Speech emotion recognition using 1D CNN with no attention. In ICSEC.

Lian, Z., Tao, J., Liu, B., Huang, J., Yang, Z., & Li, R. (2020). Context-dependent domain adversarial neural network for multimodal emotion recognition. In Interspeech.

Livingstone, S. R., & Russo, F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.

Peng, Z., Li, X., Zhu, Z., Unoki, M., Dang, J., & Akagi, M. (2020). Speech emotion recognition using 3D convolutions and attention-based sliding recurrent networks with auditory front-ends. IEEE Access, 8, 16560-16572.

Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., & Mihalcea, R. (2019). MELD: A multimodal multi-party dataset for emotion recognition in conversations. In ACL.

Schneider, S., Baevski, A., Collobert, R., & Auli, M. (2019). wav2vec: Unsupervised pre-training for speech recognition. In Interspeech.

Schuller, B., Rigoll, G., & Lang, M. (2010). Speech emotion recognition: Features and classification models. In Interspeech.

Sinnott, R. O., Radulescu, A., & Kousidis, S. (2013). Surrey Audiovisual Expressed Emotion (SAVEE) database. In AVEC.

Vryzas, N., Kotsakis, R., Liatsou, A., Dimoulas, C. A., & Kalliris, G. (2018). Speech emotion recognition for performance interaction. Journal of the Audio Engineering Society, 66(6), 457-467.

Vryzas, N., Matsiola, M., Kotsakis, R., Dimoulas, C., & Kalliris, G. (2018). Subjective evaluation of a speech emotion recognition interaction framework. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (p. 34). ACM.

Wang, Y., Yang, Y., Liu, Y., Chen, Y., Han, N., & Zhou, J. (2019). Speech emotion recognition using a combination of CNN and RNN. In Interspeech.

Yoon, S., Byun, S., & Jung, K. (2018). Multimodal speech emotion recognition using audio and text. In SLT.

Zhang, R., & Liu, M. (2020). Speech emotion recognition with self-attention. In ACL.
### Contributions