amupd committed a7330fb (verified, parent: 92b35fd): Update README.md

Files changed (1): README.md (+49, -32)
---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: augment
    path: data/augment-*
  - split: dev
    path: data/dev-*
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 8074974846.323382
    num_examples: 51517
  - name: augment
    num_bytes: 5469608715.1524
    num_examples: 6092
  - name: dev
    num_bytes: 131523760.6480159
    num_examples: 1580
  download_size: 16940862577
  dataset_size: 13676107322.123798
---
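As a quick sanity check on the front matter, `dataset_size` equals the sum of the per-split `num_bytes`, and the three splits hold 59,189 examples in total. A minimal stdlib-only sketch (the figures are copied from the YAML above; the variable names are ours):

```python
# Per-split (num_bytes, num_examples), copied from the YAML front matter.
splits = {
    "train":   (8074974846.323382, 51517),
    "augment": (5469608715.1524,   6092),
    "dev":     (131523760.6480159, 1580),
}

# Sum sizes and counts across splits.
total_bytes = sum(b for b, _ in splits.values())
total_examples = sum(n for _, n in splits.values())

# total_bytes matches dataset_size (13676107322.123798) up to float rounding;
# total_examples is 59189.
print(total_bytes, total_examples)
```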

For training and developing your models in the **closed track**, we provide the following datasets, which are publicly available on Hugging Face. Together they cover a wide range of Arabic varieties and recording conditions, with over 85K training sentences in total, spanning dialectal, Modern Standard, Classical, and code-switched Arabic speech with transcriptions. All subsets except Mixat and ArzEn are diacritized.
37
+
38
+ | Dataset | Type | Diacritized | Train | Dev |
39
+ |-----------|------------------|:-----------:|:------:|:---:|
40
+ | MDASPC | Multi-dialectal | True | 60677 | >1K |
41
+ | TunSwitch | Dialectal, CS | True | 5212 | 165 |
42
+ | ClArTTS | CA | True | 9500 | 205 |
43
+ | ArVoice | MSA | True | 2507 | – |
44
+ | ArzEn | Dialectal, CS | False | 3344 | – |
45
+ | Mixat | Dialectal, CS | False | 3721 | – |
46
+

We removed samples containing fewer than three words and stripped punctuation from all datasets to improve consistency and quality. The resulting dataset contains 57K train and 1.5K dev samples.
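The filtering described above can be sketched as follows. This is an illustration of the two rules (drop samples with fewer than three words, strip punctuation), not the authors' actual code; the punctuation set and function names are our assumptions:

```python
import re

# Illustrative punctuation set covering common Latin and Arabic marks
# (the exact set used for the released data is not specified here).
PUNCT = re.compile(r"[.,!?;:()\[\]«»،؛؟]")

def clean_transcription(text: str) -> str:
    """Strip punctuation and collapse whitespace."""
    return " ".join(PUNCT.sub(" ", text).split())

def keep_sample(text: str) -> bool:
    """Keep only samples with at least three words."""
    return len(text.split()) >= 3

samples = [
    "نعم",                      # 1 word  -> dropped
    "هذه جملة قصيرة للتجربة.",  # 4 words -> kept, punctuation removed
]
cleaned = [clean_transcription(s) for s in samples]
kept = [s for s in cleaned if keep_sample(s)]
```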

For the closed track, you may use the full train/dev sets or a subset of them (for example, you may wish to use the undiacritized subsets for semi-supervised training, or rely only on the diacritized subsets). For the open track, you can use these resources and/or any other resources for training, as long as they do not overlap with the test sets.