AdalAbilbekov committed (verified)
Commit faae8a5 · 1 Parent(s): cfd78b0

Update README.md

Files changed (1): README.md (+207 -13)
README.md CHANGED
@@ -1,19 +1,213 @@
- ## KazEmoTTS: A Kazakh Emotional Text-to-Speech Dataset
-
- **Summary:**
-
- The KazEmoTTS dataset is a large-scale resource for emotional text-to-speech (TTS) in the Kazakh language. It comprises 54,760 audio-text pairs totaling 74.85 hours of speech. The dataset covers six emotions: neutral, angry, happy, sad, scared, and surprised. The recordings were contributed by three narrators (one female, two male). A TTS model trained on this dataset achieved mel cepstral distortion (MCD) scores between 6.02 and 7.67 and mean opinion score (MOS) values between 3.51 and 3.57, demonstrating its effectiveness. The associated code and pre-trained model are publicly available (link to GitHub repository not provided due to 404 error).
 
-
- **Dataset Statistics:**
-
- | Statistic         | Value           |
- |-------------------|-----------------|
- | Number of Samples | 54,760          |
- | Total Duration    | 74.85 hours     |
- | Female Narrator   | 34.23 hours     |
- | Male Narrators    | 40.62 hours     |
- | Emotions          | Neutral, Angry, Happy, Sad, Scared, Surprised |
- | MCD Score         | 6.02 - 7.67     |
- | MOS Score         | 3.51 - 3.57     |
+ <h1 align="center">KazEmoTTS <br> ⌨️ 😐 😠 🙂 😞 😱 😮 🗣</h1>
+
+ <p align="center">
+ <a href="https://github.com/IS2AI/KazEmoTTS/stargazers">
+ <img src="https://img.shields.io/github/stars/IS2AI/KazEmoTTS"
+ alt="GitHub stars">
+ </a>
+ <a href="https://github.com/IS2AI/KazEmoTTS/issues">
+ <img src="https://img.shields.io/github/issues/IS2AI/KazEmoTTS.svg"
+ alt="GitHub issues">
+ </a>
+ <a href="https://issai.nu.edu.kz">
+ <img src="https://img.shields.io/static/v1?label=ISSAI&amp;message=official site&amp;color=blue"
+ alt="ISSAI Official Website">
+ </a>
+ </p>
+
+ <p align = "center">This repository provides a <a href="https://docs.google.com/forms/d/e/1FAIpQLSeTg88cvRbZkR5Go1p0IkQxFnOJv2KL6j2WVcsa6ut4XzQp5g/viewform">dataset</a> and a text-to-speech (TTS) model for the paper <br><a href = "https://arxiv.org/pdf/2404.01033.pdf"><b>KazEmoTTS:
+ A Dataset for Kazakh Emotional Text-to-Speech Synthesis</b></a></p>
+
+ <h2 align = "justify">Summary:</h2>
+ <p align = "justify">This study focuses on the creation of the KazEmoTTS dataset, designed for emotional Kazakh text-to-speech (TTS) applications.
+ KazEmoTTS is a collection of 54,760 audio-text pairs, with a total duration of 74.85 hours, featuring 34.23 hours delivered
+ by a female narrator and 40.62 hours by two male narrators. The emotions considered are “neutral”, “angry”,
+ “happy”, “sad”, “scared”, and “surprised”. We also developed a TTS model trained on the KazEmoTTS dataset. Objective and
+ subjective evaluations were employed to assess the quality of synthesized speech, yielding an MCD score within the range of
+ 6.02 to 7.67, alongside a MOS that spanned from 3.51 to 3.57. To facilitate reproducibility and inspire further research, we
+ have made our code, pre-trained model, and dataset accessible in our <a href = "https://github.com/IS2AI/KazEmoTTS/tree/master">GitHub repository</a>.</p>
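The MCD figures quoted above measure the spectral distance between synthesized and reference speech (lower is better), while MOS is a listener rating, typically on a 1-to-5 scale. As a rough illustration only, the sketch below computes the commonly used frame-averaged form of MCD with NumPy; the array shapes and the assumption that the two sequences are already time-aligned and exclude the energy coefficient are illustrative, not the paper's exact evaluation setup.

```python
import numpy as np

def mel_cepstral_distortion(ref_mcep: np.ndarray, syn_mcep: np.ndarray) -> float:
    """Frame-averaged MCD (in dB) between two aligned mel-cepstral sequences.

    Both arrays are assumed to have shape (num_frames, num_coeffs), to be
    time-aligned already (e.g. via dynamic time warping), and to exclude the
    0th (energy) coefficient, as is conventional.
    """
    if ref_mcep.shape != syn_mcep.shape:
        raise ValueError("sequences must be aligned to the same shape")
    diff = ref_mcep - syn_mcep
    # Commonly used scaling constant: 10 / ln(10) * sqrt(2)
    const = 10.0 / np.log(10.0) * np.sqrt(2.0)
    per_frame = const * np.sqrt(np.sum(diff ** 2, axis=1))
    return float(np.mean(per_frame))
```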
+
+ <a name = "stats"><h2>Dataset Statistics 📊</h2></a>
+
+ <table align = "center">
+ <thead align = "center">
+ <tr>
+ <th rowspan="3">Emotion</th>
+ <th rowspan="3"># recordings</th>
+ <th colspan="4">Narrator F1</th>
+ <th colspan="4">Narrator M1</th>
+ <th colspan="4">Narrator M2</th>
+ </tr>
+ <tr></tr>
+ <tr>
+ <th>Total (h)</th>
+ <th>Mean (s)</th>
+ <th>Min (s)</th>
+ <th>Max (s)</th>
+ <th>Total (h)</th>
+ <th>Mean (s)</th>
+ <th>Min (s)</th>
+ <th>Max (s)</th>
+ <th>Total (h)</th>
+ <th>Mean (s)</th>
+ <th>Min (s)</th>
+ <th>Max (s)</th>
+ </tr>
+ </thead>
+ <tbody align = "center">
+ <tr>
+ <td>neutral</td>
+ <td>9,385</td>
+ <td>5.85</td>
+ <td>5.03</td>
+ <td>1.03</td>
+ <td>15.51</td>
+ <td>4.54</td>
+ <td>4.77</td>
+ <td>0.84</td>
+ <td>16.18</td>
+ <td>2.30</td>
+ <td>4.69</td>
+ <td>1.02</td>
+ <td>15.81</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>angry</td>
+ <td>9,059</td>
+ <td>5.44</td>
+ <td>4.78</td>
+ <td>1.11</td>
+ <td>14.09</td>
+ <td>4.27</td>
+ <td>4.75</td>
+ <td>0.93</td>
+ <td>17.03</td>
+ <td>2.31</td>
+ <td>4.81</td>
+ <td>1.02</td>
+ <td>15.67</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>happy</td>
+ <td>9,059</td>
+ <td>5.77</td>
+ <td>5.09</td>
+ <td>1.07</td>
+ <td>15.33</td>
+ <td>4.43</td>
+ <td>4.85</td>
+ <td>0.98</td>
+ <td>15.56</td>
+ <td>2.23</td>
+ <td>4.74</td>
+ <td>1.09</td>
+ <td>15.25</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>sad</td>
+ <td>8,980</td>
+ <td>5.60</td>
+ <td>5.04</td>
+ <td>1.11</td>
+ <td>15.21</td>
+ <td>4.62</td>
+ <td>5.13</td>
+ <td>0.72</td>
+ <td>18.00</td>
+ <td>2.65</td>
+ <td>5.52</td>
+ <td>1.16</td>
+ <td>18.16</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>scared</td>
+ <td>9,098</td>
+ <td>5.66</td>
+ <td>4.96</td>
+ <td>1.00</td>
+ <td>15.67</td>
+ <td>4.13</td>
+ <td>4.51</td>
+ <td>0.65</td>
+ <td>16.11</td>
+ <td>2.34</td>
+ <td>4.96</td>
+ <td>1.07</td>
+ <td>14.49</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>surprised</td>
+ <td>9,179</td>
+ <td>5.91</td>
+ <td>5.09</td>
+ <td>1.09</td>
+ <td>14.56</td>
+ <td>4.52</td>
+ <td>4.92</td>
+ <td>0.81</td>
+ <td>17.67</td>
+ <td>2.28</td>
+ <td>4.87</td>
+ <td>1.04</td>
+ <td>15.81</td>
+ </tr>
+ </tbody>
+ </table>
+
+ <table align = "center">
+ <thead align = "center">
+ <tr>
+ <th>Narrator</th>
+ <th># recordings</th>
+ <th>Duration (h)</th>
+ </tr>
+ </thead>
+ <tbody align = "center">
+ <tr>
+ <td>F1</td>
+ <td>24,656</td>
+ <td>34.23</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>M1</td>
+ <td>19,802</td>
+ <td>26.51</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td>M2</td>
+ <td>10,302</td>
+ <td>14.11</td>
+ </tr>
+ <tr></tr>
+ <tr>
+ <td><b>Total</b></td>
+ <td><b>54,760</b></td>
+ <td><b>74.85</b></td>
+ </tr>
+ </tbody>
+ </table>
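As a sanity check, the per-narrator rows above sum to the overall totals (24,656 + 19,802 + 10,302 = 54,760 recordings; 34.23 + 26.51 + 14.11 = 74.85 hours), and the per-emotion recording counts in the first table also add up to 54,760. The minimal sketch below shows how such statistics could be recomputed from a per-clip metadata file; the file name `metadata.csv` and its column names (`narrator`, `emotion`, `duration`) are assumptions for illustration, not the dataset's actual layout.

```python
import csv
from collections import defaultdict

def summarize(metadata_path: str = "metadata.csv") -> None:
    """Recompute per-narrator recording counts and total hours.

    Assumes a CSV with one row per clip and columns:
    narrator, emotion, duration (clip length in seconds).
    """
    counts = defaultdict(int)
    seconds = defaultdict(float)
    with open(metadata_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["narrator"]] += 1
            seconds[row["narrator"]] += float(row["duration"])
    for narrator in sorted(counts):
        print(f"{narrator}: {counts[narrator]} recordings, "
              f"{seconds[narrator] / 3600:.2f} h")
    print(f"Total: {sum(counts.values())} recordings, "
          f"{sum(seconds.values()) / 3600:.2f} h")
```

Grouping by the assumed `emotion` column instead of `narrator` would reproduce the per-emotion counts in the first table in the same way.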
+
+ <h2 align = "justify">Synthesized samples 🔈</h2>
+ <p align = "justify">You can listen to some synthesized samples <a href = "https://anonimous4849.github.io">here</a>.</p>
+
+ <h2 align = "justify">Citation 🎓</h2>
+
+ <p align = "justify">If you use our dataset and/or model in your work, please cite our paper. Proper citation upholds academic honesty, acknowledges the authors' efforts, and supports further research in this area. Your support is genuinely appreciated.</p>
+
+ ```bibtex
+ @misc{abilbekov2024kazemotts,
+       title={KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis},
+       author={Adal Abilbekov and Saida Mussakhojayeva and Rustem Yeshpanov and Huseyin Atakan Varol},
+       year={2024},
+       eprint={2404.01033},
+       archivePrefix={arXiv},
+       primaryClass={eess.AS}
+ }
+ ```