<h1 align="center">KazEmoTTS <br> ⌨️ 😐 😠 🙂 😞 😱 😮 🗣</h1>
<p align="center">
<a href="https://github.com/IS2AI/KazEmoTTS/stargazers">
<img src="https://img.shields.io/github/stars/IS2AI/KazEmoTTS"
alt="GitHub stars">
</a>
<a href="https://github.com/IS2AI/KazEmoTTS/issues">
<img src="https://img.shields.io/github/issues/IS2AI/KazEmoTTS.svg"
alt="GitHub issues">
</a>
<a href="https://issai.nu.edu.kz">
<img src="https://img.shields.io/static/v1?label=ISSAI&message=official site&color=blue&"
alt="ISSAI Official Website">
</a>
</p>
<p align = "center">This repository provides a <a href="https://docs.google.com/forms/d/e/1FAIpQLSeTg88cvRbZkR5Go1p0IkQxFnOJv2KL6j2WVcsa6ut4XzQp5g/viewform">dataset</a> and a text-to-speech (TTS) model for the paper <br><a href = "https://arxiv.org/pdf/2404.01033.pdf"><b>KazEmoTTS:
A Dataset for Kazakh Emotional Text-to-Speech Synthesis</b></a></p>
<h2 align = "justify">Summary: </h2>
<p align = "justify">This study focuses on the creation of the KazEmoTTS dataset, designed for emotional Kazakh text-to-speech (TTS) applications.
KazEmoTTS is a collection of 54,760 audio-text pairs, with a total duration of 74.85 hours, featuring 34.23 hours delivered
by a female narrator and 40.62 hours by two male narrators. The list of the emotions considered include “neutral”, “angry”,
“happy”, “sad”, “scared”, and “surprised”. We also developed a TTS model trained on the KazEmoTTS dataset. Objective and
subjective evaluations were employed to assess the quality of synthesized speech, yielding an MCD score within the range of
6.02 to 7.67, alongside a MOS that spanned from 3.51 to 3.57. To facilitate reproducibility and inspire further research, we
have made our code, pre-trained model, and dataset accessible in our <a href = "https://github.com/IS2AI/KazEmoTTS/tree/master">GitHub repository</a>.</p>
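<p align = "justify">For reference, the MCD reported above is conventionally computed over <i>T</i> time-aligned frames of <i>K</i>-dimensional mel-cepstral coefficients, with <i>c</i> taken from the reference speech and <i>ĉ</i> from the synthesized speech. The formula below is the standard textbook formulation; the exact variant used in the paper (e.g., the cepstral order <i>K</i> or handling of the energy term) may differ:</p>

```math
\mathrm{MCD} = \frac{10\sqrt{2}}{\ln 10}\,\frac{1}{T}\sum_{t=1}^{T}\sqrt{\sum_{k=1}^{K}\left(c_{t,k}-\hat{c}_{t,k}\right)^{2}}
```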
<a name = "stats"><h2>Dataset Statistics 📊</h2></a>
<table align = "center">
<thead align = "center">
<tr>
<th rowspan="3">Emotion</th>
<th rowspan="3"># recordings</th>
<th colspan="4">Narrator F1</th>
<th colspan="4">Narrator M1</th>
<th colspan="4">Narrator M2</th>
</tr>
<tr></tr>
<tr>
<th>Total (h)</th>
<th>Mean (s)</th>
<th>Min (s)</th>
<th>Max (s)</th>
<th>Total (h)</th>
<th>Mean (s)</th>
<th>Min (s)</th>
<th>Max (s)</th>
<th>Total (h)</th>
<th>Mean (s)</th>
<th>Min (s)</th>
<th>Max (s)</th>
</tr>
</thead>
<tbody align = "center">
<tr>
<td>neutral</td>
<td>9,385</td>
<td>5.85</td>
<td>5.03</td>
<td>1.03</td>
<td>15.51</td>
<td>4.54</td>
<td>4.77</td>
<td>0.84</td>
<td>16.18</td>
<td>2.30</td>
<td>4.69</td>
<td>1.02</td>
<td>15.81</td>
</tr>
<tr></tr>
<tr>
<td>angry</td>
<td>9,059</td>
<td>5.44</td>
<td>4.78</td>
<td>1.11</td>
<td>14.09</td>
<td>4.27</td>
<td>4.75</td>
<td>0.93</td>
<td>17.03</td>
<td>2.31</td>
<td>4.81</td>
<td>1.02</td>
<td>15.67</td>
</tr>
<tr></tr>
<tr>
<td>happy</td>
<td>9,059</td>
<td>5.77</td>
<td>5.09</td>
<td>1.07</td>
<td>15.33</td>
<td>4.43</td>
<td>4.85</td>
<td>0.98</td>
<td>15.56</td>
<td>2.23</td>
<td>4.74</td>
<td>1.09</td>
<td>15.25</td>
</tr>
<tr></tr>
<tr>
<td>sad</td>
<td>8,980</td>
<td>5.60</td>
<td>5.04</td>
<td>1.11</td>
<td>15.21</td>
<td>4.62</td>
<td>5.13</td>
<td>0.72</td>
<td>18.00</td>
<td>2.65</td>
<td>5.52</td>
<td>1.16</td>
<td>18.16</td>
</tr>
<tr></tr>
<tr>
<td>scared</td>
<td>9,098</td>
<td>5.66</td>
<td>4.96</td>
<td>1.00</td>
<td>15.67</td>
<td>4.13</td>
<td>4.51</td>
<td>0.65</td>
<td>16.11</td>
<td>2.34</td>
<td>4.96</td>
<td>1.07</td>
<td>14.49</td>
</tr>
<tr></tr>
<tr>
<td>surprised</td>
<td>9,179</td>
<td>5.91</td>
<td>5.09</td>
<td>1.09</td>
<td>14.56</td>
<td>4.52</td>
<td>4.92</td>
<td>0.81</td>
<td>17.67</td>
<td>2.28</td>
<td>4.87</td>
<td>1.04</td>
<td>15.81</td>
</tr>
</tbody>
</table>
<table align = "center">
<thead align = "center">
<tr>
<th>Narrator</th>
<th># recordings</th>
<th>Duration (h)</th>
</tr>
</thead>
<tbody align = "center">
<tr>
<td>F1</td>
<td>24,656</td>
<td>34.23</td>
</tr>
<tr></tr>
<tr>
<td>M1</td>
<td>19,802</td>
<td>26.51</td>
</tr>
<tr></tr>
<tr>
<td>M2</td>
<td>10,302</td>
<td>14.11</td>
</tr>
<tr></tr>
<tr>
<td><b>Total</b></td>
<td><b>54,760</b></td>
<td><b>74.85</b></td>
</tr>
</tbody>
</table>
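<p align = "justify">The figures above can be recomputed directly from the released audio. Below is a minimal sketch assuming a flat directory of WAV files whose names encode the narrator and emotion (e.g., <code>F1_angry_000123.wav</code>); the directory path and naming scheme are illustrative assumptions, so adjust the parsing to the actual dataset layout:</p>

```python
import wave
from collections import defaultdict
from pathlib import Path

# Hypothetical layout: flat directory of files named "<narrator>_<emotion>_<id>.wav".
DATA_DIR = Path("KazEmoTTS/wavs")

# (narrator, emotion) -> list of clip durations in seconds
durations = defaultdict(list)
for wav_path in DATA_DIR.glob("*.wav"):
    narrator, emotion, _ = wav_path.stem.split("_", 2)
    with wave.open(str(wav_path), "rb") as wav:
        durations[(narrator, emotion)].append(wav.getnframes() / wav.getframerate())

# Per-narrator, per-emotion statistics matching the table columns above.
for (narrator, emotion), secs in sorted(durations.items()):
    print(f"{narrator:>3} {emotion:>10}: {len(secs):6d} recordings, "
          f"total {sum(secs) / 3600:6.2f} h, mean {sum(secs) / len(secs):5.2f} s, "
          f"min {min(secs):5.2f} s, max {max(secs):5.2f} s")
```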
<h2 align = "justify">Synthesized samples 🔈</h2>
<p align = "justify">You can listen to some synthesized samples <a href = "https://anonimous4849.github.io">here</a>.</p>
<h2 align = "justify">Citation 🎓</h2>
<p align = "justify">We kindly urge you, if you incorporate our dataset and/or model into your work, to cite our paper as a gesture of recognition for its valuable contribution. The act of referencing the relevant sources not only upholds academic honesty but also ensures proper acknowledgement of the authors' efforts. Your citation in your research significantly contributes to the continuous progress and evolution of the scholarly realm. Your endorsement and acknowledgement of our endeavours are genuinely appreciated.
```bibtex
@misc{abilbekov2024kazemotts,
      title={KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis},
      author={Adal Abilbekov and Saida Mussakhojayeva and Rustem Yeshpanov and Huseyin Atakan Varol},
      year={2024},
      eprint={2404.01033},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```