Added a README with training instructions
README.md
CHANGED
@@ -5,3 +5,47 @@ license: cc0-1.0
This is a processed LibriLight dataset ready for training the WhisperSpeech models.

See [https://github.com/collabora/WhisperSpeech](https://github.com/collabora/WhisperSpeech) for more details.

## Quick start

If you want to quickly train a basic WhisperSpeech model you can start by downloading the small subset:

```bash
# magic includes to download only the small and validation data splits and the accompanying config files
huggingface-cli download --repo-type dataset --include '*-small-*' '*small.dataset' '*-speakers*' --local-dir . -- collabora/whisperspeech-librilight

# download the semantic token model to extract the token embeddings from it
huggingface-cli download collabora/whisperspeech whisper-vq-stoks-medium-en+pl.model

# the T2S training invocation:
python3 -m whisperspeech.train_multi \
    --task "t2s_up_wds_mlang_enclm base --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
    --batch-size 32 --accumulate-grad-batches 2 \
    --epochs 2 --lr-schedule wsd \
    --tunables="--cps_input --causal_encoder --warmup_steps=300 --encoder_depth_ratio=.25" \
    --dataset-config=--vq_codes=513 \
    --training-data @librilight-t2s-train-small.dataset \
    --validation-data @librilight-t2s-val-common-speakers.dataset \
    --validation-data @librilight-t2s-val-unseen-speakers.dataset \
    --monitored-metric 'val_loss/dataloader_idx_0'

# the S2A training invocation:
python3 -m whisperspeech.train_multi \
    --task "s2a_delar_mup_wds_mlang tiny --quantizers 4 --spk_width=192 --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
    --batch-size 48 \
    --epochs 4 --lr-schedule wsd \
    --tunables="--rope --warmup_steps=300" \
    --dataset-config=--vq_codes=513 \
    --training-data @librilight-s2a-train-small.dataset \
    --validation-data @librilight-s2a-val-common-speakers.dataset \
    --validation-data @librilight-s2a-val-unseen-speakers.dataset \
    --monitored-metric 'val_loss/dataloader_idx_0'
```

The `--accumulate-grad-batches` option is set to reach a good effective batch size on a single 4090 GPU.
If you have multiple GPUs it will probably make sense to lower the per-GPU batch size. For example, 16 GPUs
with a batch size of 16 each seem to give good performance and fast training.
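As a rough rule of thumb (an assumption about how gradient accumulation and data parallelism combine, not something the training script reports), the effective batch size is the per-GPU batch size times the gradient-accumulation steps times the number of GPUs:

```shell
# effective batch size = per-GPU batch size * grad accumulation steps * GPU count
# (plain shell arithmetic; the variable names are illustrative, not flags of
# whisperspeech.train_multi)
BATCH_SIZE=32        # --batch-size
ACCUMULATE=2         # --accumulate-grad-batches
NUM_GPUS=1
echo $(( BATCH_SIZE * ACCUMULATE * NUM_GPUS ))   # prints 64
```

By the same arithmetic, 16 GPUs with a batch size of 16 and no accumulation give an effective batch size of 256.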

Because we use Maximum Update Parametrization, higher effective batch sizes always result in lower
losses and you don't need to adjust the learning rate. Unfortunately the effect is not linear, so
there is an optimal batch size and there is little benefit to increasing it further.