# S2T Example: ST on CoVoST
We replicate the experiments in *CoVoST 2 and Massively Multilingual Speech-to-Text Translation* (Wang et al., 2020).
## Data Preparation
Download and unpack Common Voice v4 to a path `${COVOST_ROOT}/${SOURCE_LANG_ID}`, then preprocess it with:
```bash
# additional Python packages for S2T data processing/model training
pip install pandas torchaudio sentencepiece

# En ASR
python examples/speech_to_text/prep_covost_data.py \
  --data-root ${COVOST_ROOT} --vocab-type char --src-lang en

# ST
python examples/speech_to_text/prep_covost_data.py \
  --data-root ${COVOST_ROOT} --vocab-type char \
  --src-lang fr --tgt-lang en
```
The generated files (manifest, features, vocabulary and data configuration) will be added to `${COVOST_ROOT}/${SOURCE_LANG_ID}`.
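To sanity-check the output, the generated manifest can be loaded with Python's `csv` module. A minimal sketch, assuming fairseq's usual S2T manifest columns (`id`, `audio`, `n_frames`, `tgt_text`, `speaker`); the two rows below are synthetic, not from a real run:

```python
import csv
import io

# Synthetic manifest in the assumed TSV layout; real entries point at
# extracted feature archives (e.g. "fbank80.zip:<offset>:<length>").
manifest = (
    "id\taudio\tn_frames\ttgt_text\tspeaker\n"
    "sample_001\tfbank80.zip:100:3200\t80\thello world\tspk1\n"
    "sample_002\tfbank80.zip:3300:4000\t100\tgood morning\tspk2\n"
)

rows = list(csv.DictReader(io.StringIO(manifest), delimiter="\t"))
for r in rows:
    print(r["id"], r["n_frames"], r["tgt_text"])
```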
Download our vocabulary files if you want to use our pre-trained models.
## ASR

### Training
We train an En ASR model to pre-train the encoder of some of the ST models.
```bash
fairseq-train ${COVOST_ROOT}/en \
  --config-yaml config_asr_en.yaml --train-subset train_asr_en --valid-subset dev_asr_en \
  --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 50000 --max-update 60000 \
  --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --report-accuracy --arch s2t_transformer_s --dropout 0.15 --optimizer adam --lr 2e-3 \
  --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \
  --attn-type None --pos-enc-type ${POS_ENC_TYPE}
```
where `ASR_SAVE_DIR` is the checkpoint root path and `POS_ENC_TYPE` is the positional encoding used in the conformer encoder. Set it to `abs`, `rope` or `rel_pos` to use absolute, rotary or relative positional encoding in the conformer layers, respectively. The transformer encoder only supports absolute positional encoding and is used by default; to switch to the conformer encoder, set `--attn-type espnet` and `--pos-enc-type ${POS_ENC_TYPE}`. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU; adjust it accordingly when using more than 1 GPU.
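The effective batch size per optimizer step is roughly `--max-tokens` × number of GPUs × `--update-freq`, which is why `--update-freq 8` on 1 GPU approximates 8 GPUs. A quick arithmetic sketch (the helper name is ours, not a fairseq API):

```python
def effective_tokens_per_update(max_tokens, num_gpus, update_freq):
    """Approximate upper bound on tokens consumed per optimizer step."""
    return max_tokens * num_gpus * update_freq

# 1 GPU with --update-freq 8 matches 8 GPUs with --update-freq 1:
assert effective_tokens_per_update(50000, 1, 8) == effective_tokens_per_update(50000, 8, 1)
print(effective_tokens_per_update(50000, 1, 8))  # 400000
```

For example, moving to 4 GPUs while keeping the same effective batch size would mean setting `--update-freq 2`.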
### Inference & Evaluation
Average the last 10 checkpoints and evaluate on the test split:

```bash
CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
python scripts/average_checkpoints.py \
  --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \
  --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}"
```
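Conceptually, `average_checkpoints.py` takes the element-wise mean of the saved parameter tensors. A toy sketch over plain floats (not the actual script, which operates on PyTorch state dicts):

```python
def average_checkpoints(states):
    """Element-wise average of parameter dicts, the idea behind
    scripts/average_checkpoints.py (which averages torch tensors)."""
    n = len(states)
    return {k: sum(s[k] for s in states) / n for k in states[0]}

# Two made-up "checkpoints" with one scalar parameter each:
ckpts = [
    {"encoder.w": 1.0, "decoder.w": 4.0},
    {"encoder.w": 3.0, "decoder.w": 2.0},
]
print(average_checkpoints(ckpts))  # {'encoder.w': 2.0, 'decoder.w': 3.0}
```

Averaging the last few checkpoints smooths out per-checkpoint noise and usually evaluates slightly better than any single checkpoint.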
```bash
fairseq-generate ${COVOST_ROOT}/en \
  --config-yaml config_asr_en.yaml --gen-subset test_asr_en --task speech_to_text \
  --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \
  --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct
```
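`--scoring wer` reports word error rate: the word-level edit distance between hypothesis and (tokenized, lowercased, punctuation-stripped) reference, divided by the reference length. A self-contained sketch of the metric itself, not fairseq's implementation:

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + insertions + deletions) / len(ref),
    computed as Levenshtein distance over whitespace tokens."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match/substitution
    return d[len(r)][len(h)] / len(r)

print(word_error_rate("the cat sat", "the cat sit"))  # one substitution out of three words
```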
### Results
| --arch | --pos-enc-type | Params | En WER | Model |
|---|---|---|---|---|
| s2t_transformer_s | - | 31M | 25.6 | Download |
| s2t_conformer | rel_pos | 42.9M | 23.18 | Download |
| s2t_conformer | rope | 42.1M | 23.8 | Download |
| s2t_conformer | abs | 42.1M | 23.8 | Download |
## ST

### Training
Using Fr-En as an example:
```bash
# use --max-tokens 50000 for En-* directions
fairseq-train ${COVOST_ROOT}/fr \
  --config-yaml config_st_fr_en.yaml --train-subset train_st_fr_en --valid-subset dev_st_fr_en \
  --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-update 30000 --max-tokens 40000 \
  --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \
  --arch s2t_transformer_s --encoder-freezing-updates 1000 --optimizer adam --lr 2e-3 \
  --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \
  --attn-type None --pos-enc-type ${POS_ENC_TYPE} \
  --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}
```
where `ST_SAVE_DIR` is the checkpoint root path and `POS_ENC_TYPE` is the positional encoding used in the conformer encoder. Set it to `abs`, `rope` or `rel_pos` to use absolute, rotary or relative positional encoding in the conformer layers, respectively. The transformer encoder only supports absolute positional encoding and is used by default; to switch to the conformer encoder, set `--attn-type espnet` and `--pos-enc-type ${POS_ENC_TYPE}`. Loading the pre-trained En ASR encoder (`--load-pretrained-encoder-from <ASR checkpoint path>`) is optional but gives faster training and better performance. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU; adjust it accordingly when using more than 1 GPU.
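`--encoder-freezing-updates 1000` keeps the (pre-trained) encoder parameters fixed for the first 1000 updates so the freshly initialized decoder can warm up first. A toy sketch of the mechanism, with made-up parameter names and plain-float SGD in place of fairseq's actual trainer:

```python
def train_step(params, grads, step, encoder_freezing_updates, lr=0.1):
    """One toy SGD step that skips encoder parameters while the
    freezing budget has not been exhausted (illustrative only)."""
    for name in params:
        if name.startswith("encoder.") and step < encoder_freezing_updates:
            continue  # encoder stays frozen during warm-up
        params[name] -= lr * grads[name]
    return params

params = {"encoder.w": 1.0, "decoder.w": 1.0}
grads = {"encoder.w": 1.0, "decoder.w": 1.0}
train_step(params, grads, step=0, encoder_freezing_updates=1000)
print(params)  # encoder.w unchanged, decoder.w updated
```

Once `step` reaches the freezing budget, both parameter groups are updated normally.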
### Inference & Evaluation
Average the last 10 checkpoints and evaluate on the test split:
```bash
CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt
python scripts/average_checkpoints.py \
  --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \
  --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}"
```
```bash
fairseq-generate ${COVOST_ROOT}/fr \
  --config-yaml config_st_fr_en.yaml --gen-subset test_st_fr_en --task speech_to_text \
  --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
  --max-tokens 50000 --beam 5 --scoring sacrebleu
```
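`--scoring sacrebleu` computes corpus-level BLEU via the sacrebleu package. As a rough illustration of what that score measures, here is a simplified corpus BLEU (uniform 4-gram weights plus brevity penalty, no smoothing; sacrebleu's own tokenization and smoothing will give different numbers):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hyps, refs, max_n=4):
    """Simplified corpus-level BLEU; illustrative, not sacrebleu."""
    match = [0] * max_n   # clipped n-gram matches per order
    total = [0] * max_n   # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            match[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            total[n - 1] += max(len(h) - n + 1, 0)
    if min(match) == 0:
        return 0.0  # unsmoothed BLEU is zero if any order has no match
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_prec)

print(corpus_bleu(["the cat sat on the mat"], ["the cat sat on the mat"]))  # 100.0
```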
### Interactive Decoding
Launch the interactive console via:

```bash
fairseq-interactive ${COVOST_ROOT}/fr --config-yaml config_st_fr_en.yaml \
  --task speech_to_text --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \
  --max-tokens 50000 --beam 5
```
Type in WAV/FLAC/OGG audio paths (one per line) after the prompt.
### Results
| --arch | --pos-enc-type | Params | ASR PT | Fr-En | De-En | Es-En | Ca-En | En-De | En-Ca | En-Fa | En-Et | Model |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| s2t_transformer_s | - | 31M | Yes | 27.2 | 17.7 | 23.1 | 19.3 | 16.1 | 21.6 | 12.9 | 12.8 | Download |
| s2t_conformer | rel_pos | 42.9M | No | 28.32 | 18.21 | 25.98 | 21.13 | 20.37 | 25.89 | 15.59 | 14.49 | Download |
| s2t_conformer | rel_pos | 42.9M | Yes | 27.15 | 18.22 | 25.14 | 21.68 | 20.35 | 25.92 | 15.76 | 16.52 | Download |
| s2t_conformer | rope | 42.1M | No | 27.61 | 17.6 | 24.91 | 20.78 | 19.7 | 25.13 | 15.22 | 15.87 | Download |
| s2t_conformer | rope | 42.1M | Yes | 26.99 | 17.71 | 24.24 | 21.24 | 19.9 | 25.25 | 15.58 | 15.97 | Download |
| s2t_conformer | abs | 42.1M | No | 27.45 | 17.25 | 25.01 | 20.26 | 19.86 | 25.25 | 15.46 | 15.81 | Download |
| s2t_conformer | abs | 42.1M | Yes | 26.52 | 17.37 | 25.40 | 20.45 | 19.57 | 25.40 | 15.17 | 15.83 | Download |