---
license: cc0-1.0
---

This is a processed LibriLight dataset ready for training the WhisperSpeech models.

See [https://github.com/collabora/WhisperSpeech](https://github.com/collabora/WhisperSpeech) for more details.

## Quick start
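
The commands below assume the Hugging Face CLI and the WhisperSpeech training code are already installed. As a minimal setup sketch (the package names below are assumptions; see the WhisperSpeech repository for the canonical installation instructions):

```bash
# Hugging Face CLI used for the downloads below (assumed install method)
pip install -U "huggingface_hub[cli]"
# WhisperSpeech training code; alternatively clone https://github.com/collabora/WhisperSpeech
pip install -U whisperspeech
```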

If you want to quickly train a basic WhisperSpeech model you can start by downloading the small subset:

```bash
# magic includes to download only the small and validation data splits and the accompanying config files
huggingface-cli download --repo-type dataset --include '*-small-*' '*small.dataset' '*-speakers*' --local-dir . -- collabora/whisperspeech-librilight

# download the semantic token model to extract the token embeddings from it
huggingface-cli download collabora/whisperspeech whisper-vq-stoks-medium-en+pl.model

# the T2S training invocation:
python3 -m whisperspeech.train_multi \
    --task "t2s_up_wds_mlang_enclm base --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
    --batch-size 32 --accumulate-grad-batches 2 \
    --epochs 2 --lr-schedule wsd \
    --tunables="--cps_input --causal_encoder --warmup_steps=300 --encoder_depth_ratio=.25" \
    --dataset-config=--vq_codes=513 \
    --training-data @librilight-t2s-train-small.dataset \
    --validation-data @librilight-t2s-val-common-speakers.dataset \
    --validation-data @librilight-t2s-val-unseen-speakers.dataset \
    --monitored-metric 'val_loss/dataloader_idx_0'

# the S2A training invocation:
python3 -m whisperspeech.train_multi \
    --task "s2a_delar_mup_wds_mlang tiny --quantizers 4 --spk_width=192 --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
    --batch-size 48 \
    --epochs 4 --lr-schedule wsd \
    --tunables="--rope --warmup_steps=300" \
    --dataset-config=--vq_codes=513 \
    --training-data @librilight-s2a-train-small.dataset \
    --validation-data @librilight-s2a-val-common-speakers.dataset \
    --validation-data @librilight-s2a-val-unseen-speakers.dataset \
    --monitored-metric 'val_loss/dataloader_idx_0'
```

The `--accumulate-grad-batches` option is set to achieve a good effective batch size on a single 4090 GPU.
If you have multiple GPUs it will probably make sense to lower the batch size. For example, 16 GPUs
with a batch size of 16 seem to give good performance and fast training.
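
As a rough sketch of the arithmetic (the 16-GPU numbers are an illustration based on the suggestion above, not a tested recipe, and assume no gradient accumulation), the effective batch size is the per-GPU batch size times `--accumulate-grad-batches` times the number of GPUs:

```bash
# effective batch size = per-GPU batch size x accumulate-grad-batches x number of GPUs
#   single 4090 (T2S command above): 32 x 2 x 1  = 64
#   16 GPUs, batch size 16:          16 x 1 x 16 = 256
```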

Because we use Maximum Update Parametrization, higher effective batch sizes always result in lower
losses and you don't need to adjust the learning rate. Unfortunately the effect is not linear, so
there is an optimal batch size and there is little benefit in increasing it further.