# HypothesesParadise

This repo releases the Robust HyPoradise dataset from the paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition."

**GitHub:** https://github.com/YUCHEN005/RobustGER

**Model:** https://huggingface.co/PeacefulData/RobustGER

**Data:** This repo

**UPDATE (Apr-18-2024):** We have released the training data, which follows the same format as the test data.
Because of their vast size, the uploaded training data does not contain the speech features.
Instead, we provide a script named `add_speech_feats_to_train_data.py` to generate them from raw speech (.wav).
You need to specify the raw speech path from the utterance id in the script.
The speech data are available here: [CHiME-4](https://entuedu-my.sharepoint.com/:f:/g/personal/yuchen005_e_ntu_edu_sg/EuLgMQbjrIJHk7dKPkjcDMIB4SYgXKKP8VBxyiZk3qgdgA),
[VB-DEMAND](https://datashare.ed.ac.uk/handle/10283/2791), [LS-FreeSound](https://github.com/archiki/Robust-E2E-ASR), [NOIZEUS](https://ecs.utdallas.edu/loizou/speech/noizeus/).
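
The part you supply to the script is the mapping from each utterance id to its raw `.wav` file. The helper below is purely illustrative — the function name and the flat directory layout are assumptions, so adapt it to however you organized the downloaded corpora:

```python
from pathlib import Path

def resolve_wav_path(utt_id: str, speech_root: str) -> Path:
    """Map an utterance id to its raw .wav file.

    Assumes a flat layout like <speech_root>/<utt_id>.wav; real corpora
    (e.g. CHiME-4) may nest files by noise condition or speaker instead.
    """
    return Path(speech_root) / f"{utt_id}.wav"
```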
|
**IMPORTANT:** The vast speech feature size mentioned above arises because Whisper requires a fixed input length of 30 s, which is far longer than most utterances. Please do the following step before running data generation:
- Modify the [model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) `x = (x + self.positional_embedding).to(x.dtype)` to be `x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)`
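
The one-line patch slices the positional-embedding table to the actual number of encoder frames, so shorter-than-30 s inputs broadcast against it instead of failing a shape check. A minimal NumPy toy of that slicing (the dimensions here are made up, not Whisper's real ones):

```python
import numpy as np

max_len, dim = 1500, 8             # illustrative sizes only
pos_emb = np.zeros((max_len, dim)) # full positional-embedding table
x = np.ones((2, 600, dim))         # a batch of shorter inputs (600 frames)

# Adding the full table would fail: (2, 600, 8) vs (1500, 8).
# Slicing to the actual frame count broadcasts cleanly:
y = x + pos_emb[: x.shape[1], :]
print(y.shape)  # (2, 600, 8)
```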
|
**UPDATE (Apr-29-2024):** To support customization, we release the script `generate_robust_hp.py` so that users can generate train/test data from their own ASR datasets.
We also release two packages required for generation: the `jiwer` package, which `generate_robust_hp.py` imports locally, and the Whisper decoding script `decoding.py`, which should be placed under the locally installed Whisper directory `<your-path>/whisper/whisper`.
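
`jiwer` is used to score hypotheses by word error rate (WER): word-level edit distance divided by the reference word count. A pure-Python sketch of that metric (for intuition only — not jiwer's actual implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate; reference must be non-empty."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```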
|