aashay-sarvam committed on
Commit c68a26f · verified · 1 Parent(s): 11e0101

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +79 -53

README.md CHANGED
@@ -21,83 +21,109 @@ tags:
    - tts
    - benchmark
    - indian-languages
- dataset_info:
-   features:
-   - name: text
-     dtype: string
-   - name: language
-     dtype: string
-   - name: usecase
-     dtype: string
-   - name: eval_category
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 801182
-     num_examples: 1815
-   download_size: 355512
-   dataset_size: 801182
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
  ---

  # TTS General Benchmark

- A multilingual Text-to-Speech (TTS) benchmark dataset covering 11 Indian languages across 13 diverse use cases.

- ## Dataset Description

- This dataset is designed for evaluating TTS systems on Indian languages. It contains 1,261 unique text samples spanning real-world scenarios from conversational AI to audiobook narration.

- ### Languages

- Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu

- ### Use Cases

  | Use Case | Samples |
- |----------|---------|
- | Conversational Bots | 274 |
  | Audiobook | 132 |
  | Information Narration / News | 121 |
  | General Conversations | 110 |
  | Education | 110 |
- | AI Assistants | 109 |
- | Content Creation | 109 |
- | Culture | 76 |
- | Announcements | 66 |
- | Emergency Alert | 11 |
- | Public Notice | 33 |
  | Indianisms | 55 |
  | Insane Repetition | 55 |

- ## Dataset Structure

- Each sample contains the following fields:

- ```json
- {
-   "text": "The text to be synthesized",
-   "language": "hi",
-   "usecase": "Conversational Bots"
- }
- ```

- ### Fields

- - **text** (`string`): The input text for TTS synthesis
- - **language** (`string`): ISO 639-1 language code
- - **usecase** (`string`): The application domain/category of the text

- ## Usage

- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("sarvamai/tts-general-benchmark")
- ```
    - tts
    - benchmark
    - indian-languages
+   - telephony
+   - evaluation
+   - multilingual
  ---

  # TTS General Benchmark

+ A multilingual Text-to-Speech (TTS) evaluation benchmark covering 11 Indian languages across multiple real-world use cases. The dataset is designed for systematic and repeatable evaluation of TTS systems under both high-quality and telephony bandwidth conditions.
+
+ **Total prompts:** 1,815 unique text samples
+ **Languages:** 11
+ **Evaluation tracks:** High Quality + 8 kHz Telephony
+
+ > This is an **evaluation-only benchmark dataset** intended for testing and comparison, not for model training.
+
+ ---
+
+ # Dataset Overview
+
+ The TTS General Benchmark provides diverse prompts that reflect practical deployment scenarios such as conversational agents, announcements, narration, support calls, and telephony bots. Prompts are curated to test clarity, robustness, pronunciation handling, and expressive capability.
+
+ Each prompt is labeled with:
+
+ - `language`
+ - `usecase`
+ - `eval_category` (evaluation track)
+
+ The dataset contains **two independently evaluated tracks** with different prompt distributions.
+
+ ---
+
+ # Evaluation Categories
+
+ ## high_quality
+
+ Full-band prompts intended for studio / wideband TTS evaluation. These focus on naturalness, expressiveness, and content realism.
+
+ ### High Quality Use Cases
+
  | Use Case | Samples |
+ |----------|---------|
+ | Conversational Bots | 275 |
  | Audiobook | 132 |
  | Information Narration / News | 121 |
  | General Conversations | 110 |
  | Education | 110 |
+ | AI Assistants | 110 |
+ | Content Creation | 110 |
+ | Culture | 77 |
+ | Announcements | 110 |
  | Indianisms | 55 |
  | Insane Repetition | 55 |

+ **High-quality total:** 1,265
+
+ ---
+
+ ## 8khz_telephony
+
+ Narrowband prompts designed for telephony and call-center evaluation (8 kHz playback target). These measure intelligibility, clarity, and robustness under bandwidth constraints.
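As a rough sketch of what the bandwidth constraint means in practice, an 8 kHz playback target can be simulated by low-pass averaging and decimating wideband audio. The 48 kHz source rate, the boxcar filter, and the `to_8khz` helper below are illustrative assumptions, not the benchmark's actual processing; a real pipeline would use a proper polyphase resampler.

```python
import numpy as np

# Sketch only: simulate an 8 kHz playback target from wideband audio.
# Assumes the source rate is an integer multiple of 8 kHz; the boxcar
# average is a crude stand-in for a real anti-aliasing low-pass filter.
def to_8khz(audio: np.ndarray, source_rate: int = 48_000) -> np.ndarray:
    factor = source_rate // 8_000              # e.g. 6 for 48 kHz input
    trimmed = audio[: len(audio) // factor * factor]
    return trimmed.reshape(-1, factor).mean(axis=1)

one_second = np.zeros(48_000, dtype=np.float32)
print(to_8khz(one_second).shape)  # (8000,)
```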
+
+ ### Telephony Use Cases
+
+ | Use Case | Samples |
+ |----------|---------|
+ | collections | 110 |
+ | edge_cases | 110 |
+ | sales_bot | 110 |
+ | support | 110 |
+ | survey_bot | 110 |
+
+ **Telephony total:** 550
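The stated totals follow directly from the use-case tables; summing the counts reproduces 1,265 high-quality and 550 telephony prompts, 1,815 in all:

```python
# Sample counts copied from the two use-case tables above.
high_quality = {
    "Conversational Bots": 275, "Audiobook": 132,
    "Information Narration / News": 121, "General Conversations": 110,
    "Education": 110, "AI Assistants": 110, "Content Creation": 110,
    "Culture": 77, "Announcements": 110, "Indianisms": 55,
    "Insane Repetition": 55,
}
telephony = {
    "collections": 110, "edge_cases": 110, "sales_bot": 110,
    "support": 110, "survey_bot": 110,
}

hq_total = sum(high_quality.values())    # 1265
tel_total = sum(telephony.values())      # 550
print(hq_total + tel_total)              # 1815
```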
+
+ ---
+
+ # Supported Languages
+
+ | Language | Code |
+ |-----------|------|
+ | English | en |
+ | Hindi | hi |
+ | Bengali | bn |
+ | Tamil | ta |
+ | Telugu | te |
+ | Kannada | kn |
+ | Malayalam | ml |
+ | Marathi | mr |
+ | Gujarati | gu |
+ | Odia | od |
+ | Punjabi | pa |
+
+ Language coverage is shared across both evaluation tracks.
+
+ ---
+
+ # Dataset Structure
+
+ Each JSONL row contains:
+
+ ```json
+ {
+   "text": "The text to be synthesized",
+   "language": "hi",
+   "usecase": "Conversational Bots",
+   "eval_category": "high_quality"
+ }
+ ```
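Rows in this shape can be grouped by evaluation track with the standard library alone; the two records below are illustrative stand-ins, not actual dataset entries:

```python
import json
from collections import defaultdict

# Illustrative JSONL lines in the schema above; not real dataset rows.
jsonl = "\n".join([
    '{"text": "Namaste!", "language": "hi", '
    '"usecase": "Conversational Bots", "eval_category": "high_quality"}',
    '{"text": "Press 1 for support.", "language": "en", '
    '"usecase": "support", "eval_category": "8khz_telephony"}',
])

tracks = defaultdict(list)
for line in jsonl.splitlines():
    row = json.loads(line)
    tracks[row["eval_category"]].append(row)

print(sorted(tracks))  # ['8khz_telephony', 'high_quality']
```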