---
license: cc-by-nc-4.0
language:
- en
- de
- es
multilinguality:
- multilingual
task_categories:
- automatic-speech-recognition
- audio-classification
pretty_name: Multilingual Speech Sample
dataset_info:
- config_name: all_samples
  features:
  - name: id
    dtype: int64
  - name: gender
    dtype: string
  - name: ethnicity
    dtype: string
  - name: occupation
    dtype: string
  - name: country_code
    dtype: string
  - name: birth_place
    dtype: string
  - name: mother_tongue
    dtype: string
  - name: dialect
    dtype: string
  - name: year_of_birth
    dtype: int64
  - name: years_at_birth_place
    dtype: int64
  - name: languages_data
    dtype: string
  - name: os
    dtype: string
  - name: device
    dtype: string
  - name: browser
    dtype: string
  - name: duration
    dtype: float64
  - name: emotions
    dtype: string
  - name: language
    dtype: string
  - name: location
    dtype: string
  - name: noise_sources
    dtype: string
  - name: script_id
    dtype: int64
  - name: type_of_script
    dtype: string
  - name: script
    dtype: string
  - name: transcript
    dtype: string
  - name: transcription_segments
    dtype: string
  - name: audio
    dtype: audio
  - name: speaker_id
    dtype: string
  splits:
  - name: train
    num_examples: 1196
- config_name: english_united_states
  splits:
  - name: train
    num_examples: 277
- config_name: english_nigeria
  splits:
  - name: train
    num_examples: 265
- config_name: english_china
  splits:
  - name: train
    num_examples: 185
- config_name: german_germany
  splits:
  - name: train
    num_examples: 328
- config_name: spanish_mexico
  splits:
  - name: train
    num_examples: 141
configs:
- config_name: all_samples
  data_files:
  - split: train
    path: data/*/train-*.parquet
- config_name: english_united_states
  data_files:
  - split: train
    path: data/english_united_states/train-*.parquet
- config_name: english_nigeria
  data_files:
  - split: train
    path: data/english_nigeria/train-*.parquet
- config_name: english_china
  data_files:
  - split: train
    path: data/english_china/train-*.parquet
- config_name: german_germany
  data_files:
  - split: train
    path: data/german_germany/train-*.parquet
- config_name: spanish_mexico
  data_files:
  - split: train
    path: data/spanish_mexico/train-*.parquet
size_categories:
- 1K<n<10K
---
# Silencio Network: Multilingual Accent Speech Dataset (Sample)

<p align="left">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/69162b50b89e7abe20de4b5a/LWhs4p2lPFcyiVsP0tluu.png" width="40%">
</p>

## Overview

Silencio data is collected in the wild from a large, opt-in community (1.2M users across 180+ countries), capturing the real-world accents, dialects, devices, and environments that lab-recorded or scraped datasets rarely reflect. Every recording is tied to explicit, traceable consent and processed through privacy-first pipelines (GDPR/CCPA compliant, anonymized, with PII hashed), which reduces legal risk for enterprise users. The same community also lets us scale quickly into hard-to-source languages and niches, so clients get both authenticity today and a credible path to large volumes tomorrow.

This dataset is a crowdsourced, multilingual speech dataset of accented English and non-English speech, designed for model training, benchmarking, and acoustic analysis. It emphasizes accent variation, short-form scripted prompts, and spontaneous free speech. All recordings were produced by contributors using their own devices, and a Whisper-generated transcript is provided for every sample.

The dataset is structured for direct use in ASR, TTS, accent classification, diarization-adjacent analysis, speech segmentation, and embedding evaluation.

## Languages and Accents
This dataset covers five language–region pairs (to learn about other combinations, please reach out to us):

- **English (China)**: English spoken with a Mandarin-influenced accent
- **English (Nigeria)**: Nigerian-accented English
- **English (United States)**: American English
- **German (Germany)**: native German speakers
- **Spanish (Mexico)**: native Mexican Spanish speakers

All recordings are stored as **48 kHz WAV** files.

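Many ASR front ends (Whisper included) expect 16 kHz input, so the 48 kHz WAVs typically need downsampling first. A minimal, dependency-free sketch of linear-interpolation resampling (illustration only; for production use a proper polyphase/sinc resampler, and with the `datasets` library the usual route is casting the `audio` column with `Audio(sampling_rate=16000)`):

```python
import math

def resample_linear(samples, sr_in, sr_out):
    """Naive linear-interpolation resampler (illustration only)."""
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out          # fractional index into the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# One second of a 440 Hz tone at 48 kHz, downsampled to 16 kHz.
tone_48k = [math.sin(2 * math.pi * 440 * n / 48_000) for n in range(48_000)]
tone_16k = resample_linear(tone_48k, 48_000, 16_000)
```
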
## Speech Types
Each sample belongs to one of three categories:

- **free_speech**: unscripted speech on a provided topic
- **keywords**: short isolated prompts containing specific phrases or terms
- **monologues**: longer scripted passages

These values appear in the field `type_of_script`.

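Since `type_of_script` is a plain string column, selecting one category is a simple row filter. A sketch over illustrative rows (real rows carry the full feature schema; with the `datasets` library the equivalent is `ds.filter(...)`):

```python
# Illustrative rows only; real samples carry the full feature schema.
rows = [
    {"id": 1, "type_of_script": "free_speech"},
    {"id": 2, "type_of_script": "keywords"},
    {"id": 3, "type_of_script": "monologues"},
    {"id": 4, "type_of_script": "keywords"},
]

keywords_only = [r for r in rows if r["type_of_script"] == "keywords"]
```
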
## Recording Conditions
All data is **crowdsourced**. Contributors record themselves using their available hardware and environment; conditions therefore vary naturally across microphones, devices, and noise profiles. No studio-grade normalisation or homogenisation is applied.

## Transcription
Transcriptions are machine-generated using **OpenAI Whisper**, preserving its segmentation structure where applicable.

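The `transcription_segments` column is stored as a serialized string. Assuming JSON serialization of Whisper-style segments (`start`/`end` in seconds plus `text`; verify the exact format against real rows before relying on this), decoding looks like:

```python
import json

# Hypothetical serialized value mimicking Whisper's segment output.
raw = ('[{"start": 0.0, "end": 2.4, "text": " Hello there."}, '
       '{"start": 2.4, "end": 5.1, "text": " How are you?"}]')

segments = json.loads(raw)
spoken = " ".join(s["text"].strip() for s in segments)
speech_span = segments[-1]["end"] - segments[0]["start"]  # seconds covered
```
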
## Dataset Statistics
Durations are given in hours. Counts reflect samples within each `(language, region, type_of_script)` partition.

### English (China)
| type_of_script | duration_hrs | recordings | speakers |
|----------------|--------------|------------|----------|
| free_speech    | 0.99         | 72         | 19       |
| keywords       | 0.48         | 57         | 10       |
| monologues     | 0.98         | 56         | 11       |

### English (Nigeria)
| type_of_script | duration_hrs | recordings | speakers |
|----------------|--------------|------------|----------|
| free_speech    | 0.98         | 75         | 65       |
| keywords       | 0.99         | 141        | 101      |
| monologues     | 0.99         | 49         | 32       |

### English (United States)
| type_of_script | duration_hrs | recordings | speakers |
|----------------|--------------|------------|----------|
| free_speech    | 0.99         | 80         | 35       |
| keywords       | 0.99         | 119        | 40       |
| monologues     | 0.99         | 78         | 27       |

### German (Germany)
| type_of_script | duration_hrs | recordings | speakers |
|----------------|--------------|------------|----------|
| free_speech    | 0.98         | 99         | 34       |
| keywords       | 0.99         | 152        | 37       |
| monologues     | 0.98         | 77         | 27       |

### Spanish (Mexico)
| type_of_script | duration_hrs | recordings | speakers |
|----------------|--------------|------------|----------|
| free_speech    | 0.98         | 90         | 6        |
| keywords       | 0.05         | 6          | 2        |
| monologues     | 0.70         | 45         | 9        |

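As a sanity check, the per-partition recording counts above sum exactly to the config sizes declared in the metadata (e.g. 185 for `english_china`, 1196 overall):

```python
# Recording counts copied from the tables above.
recordings = {
    "english_china":         {"free_speech": 72, "keywords": 57,  "monologues": 56},
    "english_nigeria":       {"free_speech": 75, "keywords": 141, "monologues": 49},
    "english_united_states": {"free_speech": 80, "keywords": 119, "monologues": 78},
    "german_germany":        {"free_speech": 99, "keywords": 152, "monologues": 77},
    "spanish_mexico":        {"free_speech": 90, "keywords": 6,   "monologues": 45},
}

per_config = {name: sum(counts.values()) for name, counts in recordings.items()}
total = sum(per_config.values())  # 1196, matching the all_samples train split
```
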
## File Structure
```
data/
  english_china/
    train-0000.parquet
  english_nigeria/
    train-0000.parquet
  english_united_states/
    train-0000.parquet
  german_germany/
    train-0000.parquet
  spanish_mexico/
    train-0000.parquet
```

Each parquet contains a mixture of **free_speech**, **keywords**, and **monologues**.

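Each per-language config selects its shards with a glob, while `all_samples` unions them via `data/*/train-*.parquet`. A sketch of how those patterns resolve against the layout above (the Hub applies its own glob semantics; `fnmatch` is used here purely for illustration):

```python
from fnmatch import fnmatch

# Shard paths from the layout above.
paths = [
    "data/english_china/train-0000.parquet",
    "data/english_nigeria/train-0000.parquet",
    "data/english_united_states/train-0000.parquet",
    "data/german_germany/train-0000.parquet",
    "data/spanish_mexico/train-0000.parquet",
]

all_samples = [p for p in paths if fnmatch(p, "data/*/train-*.parquet")]
german_only = [p for p in paths if fnmatch(p, "data/german_germany/train-*.parquet")]
```
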
## Feature Schema
All configurations share the same feature structure:

- id: integer (unique identifier)
- speaker_id: string (hashed or anonymized speaker ID)
- gender: string (speaker gender)
- ethnicity: string (speaker ethnicity)
- occupation: string (occupation or profession)
- country_code: string (ISO 3166-1 alpha-2 code)
- birth_place: string (country or region of birth)
- mother_tongue: string (native language)
- dialect: string (regional dialect)
- year_of_birth: int (birth year, YYYY)
- years_at_birth_place: int (years lived at birth place)
- languages_data: string (serialized language–proficiency data)
- os: string (recording operating system)
- device: string (recording device type)
- browser: string (browser used, if web-based)
- duration: float (audio length in seconds)
- emotions: string (brace-formatted emotion labels)
- language: string (primary language of the recording)
- location: string (recording location category)
- noise_sources: string (brace-formatted background noise labels)
- script_id: int (script template identifier)
- type_of_script: string {free_speech, keywords, monologues} (script category)
- script: string (text intended to be spoken)
- transcript: string (Whisper-generated transcription)
- transcription_segments: string (serialized segmentation with timing and word data)
- audio: WAV audio object (associated audio file)

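Both `emotions` and `noise_sources` are described as brace-formatted label strings. Assuming values like `'{background_music, traffic}'` (the exact serialization is an assumption; confirm it on real rows), a small parser:

```python
def parse_braced(value: str) -> list[str]:
    """Split a brace-formatted label string into a clean list of labels."""
    inner = value.strip().strip("{}")
    return [item.strip() for item in inner.split(",") if item.strip()]

noise = parse_braced("{background_music, traffic}")
empty = parse_braced("{}")
```
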
## Licensing
Released under **CC BY-NC 4.0**.
Commercial use is not permitted. Attribution to **Silencio Network** is required for any publication or derivative dataset.

## Intended Use
Suitable for:

- accent-conditioned ASR training
- multilingual speech recognition
- TTS voicebank generation
- speaker embedding and similarity evaluation
- robustness benchmarking
- keyword-spotting models
- segmentation and VAD evaluation

## Limitations
- Transcripts are automatically generated. Errors may be present.
- Crowdsourced device diversity introduces variable noise levels.

## Citation
```
@dataset{silencio_network_speech_2025,
  title   = {Silencio Network Multilingual Accent Speech Corpus},
  author  = {Silencio Network},
  year    = {2025},
  license = {CC BY-NC 4.0}
}
```