---
language:
- en
tags:
- esc
datasets:
- librispeech
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer.
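A minimal sketch of the tokenizer step is shown below. The arguments accepted by `get_ctc_tokenizer.py` are not documented here, so the flags are illustrative assumptions modelled on the training script's arguments; consult the script itself for its supported options.

```bash
#!/usr/bin/env bash
# Hypothetical invocation: the flag names below are assumptions mirroring
# the training script's arguments, not confirmed options of this script.
python get_ctc_tokenizer.py \
	--dataset_name="esc-benchmark/esc-datasets" \
	--dataset_config_name="librispeech" \
	--output_dir="wav2vec2-ctc-librispeech-tokenizer"
```

With the tokenizer saved under `wav2vec2-ctc-librispeech-tokenizer` (the name passed to `--tokenizer_name` below), execute the following command to train the CTC system: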
```bash
#!/usr/bin/env bash
# Fine-tune the pre-trained wav2vec2 CTC checkpoint on the LibriSpeech
# config of the ESC datasets for 50k steps, evaluating and saving
# checkpoints every 10k steps and pushing the result to the Hub.
python run_flax_speech_recognition_ctc.py \
	--model_name_or_path="esc-benchmark/wav2vec2-ctc-pretrained" \
	--tokenizer_name="wav2vec2-ctc-librispeech-tokenizer" \
	--dataset_name="esc-benchmark/esc-datasets" \
	--dataset_config_name="librispeech" \
	--output_dir="./" \
	--wandb_project="wav2vec2-ctc" \
	--wandb_name="wav2vec2-ctc-librispeech" \
	--max_steps="50000" \
	--save_steps="10000" \
	--eval_steps="10000" \
	--learning_rate="3e-4" \
	--logging_steps="25" \
	--warmup_steps="5000" \
	--preprocessing_num_workers="1" \
	--hidden_dropout="0.2" \
	--activation_dropout="0.2" \
	--feat_proj_dropout="0.2" \
	--do_train \
	--do_eval \
	--do_predict \
	--overwrite_output_dir \
	--gradient_checkpointing \
	--freeze_feature_encoder \
	--push_to_hub \
	--use_auth_token
```