viks66 committed (verified) · Commit 462ac5d · Parent(s): 2d1f6ea

Update README.md
---
license: cc-by-4.0
---

This corpus contains paired data of speech, articulatory movements (EMA) and phonemes. There are 38 speakers in the corpus, each with 460 utterances.

The raw audio files are in `audios.zip`. The EMA data and the preprocessed data are stored in `processed.zip`. The processed data can be loaded with PyTorch and has the following keys:

- `ema_raw`: the raw EMA data
- `ema_clipped`: the EMA data after trimming using begin-end time stamps
- `ema_trimmed_and_normalised_with_6_articulators`: the EMA data after trimming using begin-end time stamps, followed by articulator-specific standardisation
- `mfcc`: 13-dimensional MFCCs computed on the trimmed audio
- `phonemes`: the phonemes uttered in the audio
- `durations`: duration values for each phoneme
- `begin_end`: begin-end time stamps used to trim the audio / raw EMA

To use this data for tasks such as acoustic-to-articulatory inversion (AAI), use `ema_trimmed_and_normalised_with_6_articulators` and `mfcc` as the data.
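The key layout above can be sketched in Python. Only the key names come from this card; the file path, tensor shapes, and example values below are hypothetical placeholders, so a dummy dictionary stands in for one loaded utterance:

```python
# Sketch of working with one preprocessed utterance. Only the key names
# are taken from this dataset card; the path and shapes are assumptions.
#
# Real usage would look something like (path is an assumption):
#     import torch
#     utt = torch.load("processed/speaker01/utt001.pt")

T = 450  # number of frames after trimming (placeholder)
utt = {
    "ema_raw": [[0.0] * 12 for _ in range(500)],    # untrimmed EMA
    "ema_clipped": [[0.0] * 12 for _ in range(T)],  # trimmed with begin-end stamps
    "ema_trimmed_and_normalised_with_6_articulators": [[0.0] * 12 for _ in range(T)],
    "mfcc": [[0.0] * 13 for _ in range(T)],         # 13-dim MFCC per frame
    "phonemes": ["sil", "ah", "sil"],               # placeholder phoneme sequence
    "durations": [50, 350, 50],                     # per-phoneme durations
    "begin_end": (0.10, 4.60),                      # trim time stamps
}

# For AAI, pair acoustic inputs (MFCC) with normalised EMA targets:
x = utt["mfcc"]
y = utt["ema_trimmed_and_normalised_with_6_articulators"]
assert len(x) == len(y)  # frame-aligned input/target pairs
```

Because both `mfcc` and the normalised EMA are computed on the trimmed signal, they can be paired frame-wise as AAI input/target without further alignment.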
___

If you have used this dataset in your work, please use the following reference to cite it:

```
Bandekar, J., Udupa, S., Ghosh, P.K. (2024) Articulatory synthesis using representations learnt through phonetic label-aware contrastive loss. Proc. Interspeech 2024, 427-431, doi: 10.21437/Interspeech.2024-1756
```