tags:
- asr
---
This dataset combines [google/fleurs](https://huggingface.co/datasets/google/fleurs), [openslr/openslr42](https://huggingface.co/datasets/openslr/openslr), and a cleaned [seanghay/khmer_mpwt_speech](https://huggingface.co/datasets/seanghay/khmer_mpwt_speech).

Several processing steps are applied:

1. clean up [seanghay/khmer_mpwt_speech](https://huggingface.co/datasets/seanghay/khmer_mpwt_speech): manually correct wrong transcriptions across 2,058 rows
2. normalize transcriptions: remove invisible whitespace; convert `ៗ`, numbers, currencies, and dates into Khmer text; and separate each word by a space
3. filter out texts that encode to more than 448 token ids: tokenize each transcription with the Whisper-Small tokenizer and drop sequences longer than 448
4. filter out audio longer than 30 seconds
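The normalization and filtering steps above can be sketched roughly as follows. This is a minimal illustration, not the actual processing script: the exact set of invisible characters, the `toy_tokenizer` stand-in for the Whisper-Small tokenizer, and the function names are all assumptions.

```python
import re

MAX_TOKEN_IDS = 448       # text-length cutoff used in step 3
MAX_AUDIO_SECONDS = 30.0  # audio-length cutoff used in step 4

# Invisible characters often embedded in Khmer text (assumed set):
# zero-width space, zero-width non-joiner/joiner, byte-order mark.
INVISIBLE = re.compile(r"[\u200b\u200c\u200d\ufeff]")

def normalize(text: str) -> str:
    """Step 2 (in part): drop invisible whitespace and collapse runs of
    spaces so each word is separated by a single space."""
    text = INVISIBLE.sub(" ", text)
    return " ".join(text.split())

def keep_row(text: str, n_samples: int, sampling_rate: int, tokenizer) -> bool:
    """Steps 3-4: keep a row only if the encoded transcription fits in
    448 token ids and the audio lasts at most 30 seconds."""
    if len(tokenizer(text)) > MAX_TOKEN_IDS:
        return False
    return n_samples / sampling_rate <= MAX_AUDIO_SECONDS
```

In a real pipeline the `tokenizer` argument would presumably be the Whisper-Small tokenizer (e.g. loaded via the `transformers` library) and `keep_row` would be applied with the `datasets` library's `Dataset.filter`.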