Modalities: Audio, Text

rassulya committed
Commit 1983c35 · verified · 1 Parent(s): 53b0664

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+27 -10)

README.md CHANGED
@@ -2,33 +2,50 @@
 
 **Dataset Summary:**
 
-The KazEmoTTS dataset is a collection of 54,760 audio-text pairs, totaling 74.85 hours of emotional speech in the Kazakh language. The dataset features recordings from three narrators (one female, two male), expressing six emotions: neutral, angry, happy, sad, scared, and surprised. A text-to-speech model trained on this dataset is also available. The quality of the synthesized speech was evaluated using both objective (MCD) and subjective (MOS) metrics.
+The KazEmoTTS dataset is a collection of 54,760 audio-text pairs totaling 74.85 hours of emotional speech in the Kazakh language. The dataset includes recordings from three narrators (one female, two male) expressing six emotions: neutral, angry, happy, sad, scared, and surprised. A text-to-speech (TTS) model trained on this dataset is also available. The dataset aims to facilitate research and development in emotional TTS for the Kazakh language.
 
 
-**Dataset Structure:**
+**Dataset Characteristics:**
 
-The dataset contains audio files paired with their corresponding text transcriptions. The audio is categorized by emotion and speaker.
+* **Size:** 54,760 audio-text pairs
+* **Duration:** 74.85 hours (34.23 hours female, 40.62 hours male)
+* **Languages:** Kazakh
+* **Emotions:** Neutral, Angry, Happy, Sad, Scared, Surprised
+* **Number of Speakers:** 3 (1 female, 2 male)
 
-**Evaluation Metrics:**
 
-The synthesized speech quality was assessed using Mean Opinion Score (MOS) and Mel Cepstral Distortion (MCD). The MOS scores ranged from 3.51 to 3.57, and the MCD scores fell between 6.02 and 7.67.
+**Data Fields:**
+
+The dataset contains audio files and corresponding text transcripts. Specific field names and formats are available in the linked repository.
 
 
-**Languages:**
+**Evaluation Metrics:**
 
-Kazakh
+The quality of the synthesized speech generated by a model trained on the KazEmoTTS dataset was evaluated using Mean Opinion Score (MOS) and Mel-Cepstral Distortion (MCD). MOS ranged from 3.51 to 3.57, and MCD ranged from 6.02 to 7.67.
 
 
 **License:**
 
-[Please specify the license here. This information was not provided in the source text.]
+[Insert License Here - This information is missing from the provided text]
 
 
 **Citation:**
 
-[Please add citation information here. This information was not provided in the source text.]
+[Insert Citation Here - This information is missing from the provided text]
 
 
 **Contact:**
 
-[Please add contact information here if available. This information was not provided in the source text.]
+[Insert Contact Information Here - This information is missing from the provided text]
+
+
+**How to Use:**
+
+Instructions on how to access and utilize the dataset are provided within the linked repository.
+
+
+**Links:**
+
+* **Dataset Repository:** [Insert GitHub Repository Link Here]
+* **Model Repository:** [Insert GitHub Model Repository Link Here, if different from dataset]
+
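The updated card reports Mel-Cepstral Distortion figures of 6.02 to 7.67 dB. For reference, a minimal pure-Python sketch of the commonly used MCD formulation is below; it assumes the frame sequences are already time-aligned and that the 0th (energy) coefficient has been excluded, and the exact variant behind the reported figures is not specified in this card:

```python
import math

def mcd_db(ref, syn):
    """Mel-Cepstral Distortion (dB) between two aligned frame sequences.

    ref, syn: lists of equal-length coefficient vectors (one per frame),
    with the 0th (energy) coefficient conventionally excluded beforehand.
    Frames are assumed to be already time-aligned (e.g. via DTW).
    """
    if len(ref) != len(syn):
        raise ValueError("sequences must contain the same number of frames")
    scale = 10.0 / math.log(10.0)
    total = 0.0
    for r, s in zip(ref, syn):
        # Per-frame Euclidean distance over coefficients, scaled to dB.
        total += scale * math.sqrt(2.0 * sum((a - b) ** 2 for a, b in zip(r, s)))
    return total / len(ref)

# Identical sequences give zero distortion.
frames = [[0.1, -0.2, 0.05]] * 4
print(mcd_db(frames, frames))  # 0.0
```

Lower values indicate synthesized mel-cepstra closer to the reference recording, which is why MCD is reported alongside the subjective MOS listening scores.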