ales committed
Commit f8f168e · 1 Parent(s): 3b23b6a

Training in progress, step 210

pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8f9d8969563a430c539e7832a0f581b9fc1ecc43821015672854439cc52fdd59
+ oid sha256:4f56ce3f19074d87c3d8e9a7b783e7309350cd8a1fc62c73f9385a13a9c9157c
  size 151098921
runs/Dec13_12-14-07_d7f040c448a8/1670933661.007405/events.out.tfevents.1670933661.d7f040c448a8.15037.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cfcfaad843999a1b5c7b7f78449a77aceb6f30d65d780ac91c310870f3a1f4e
+ size 5883
runs/Dec13_12-14-07_d7f040c448a8/events.out.tfevents.1670933660.d7f040c448a8.15037.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ff16a5d6ea1bba1706d86a4c1dd60b17f64b22f309363f616133efa89438979
+ size 4744
src/readme.md DELETED
@@ -1,106 +0,0 @@
## Description

Fine-tuning the [OpenAI Whisper](https://github.com/openai/whisper) model for the Belarusian language during the
[Whisper fine-tuning Event](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event)
hosted by HuggingFace x Lambda.

The code in this repository is a modified version of the code from the
[Whisper fine-tuning Event](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event) repo.

## Fine-tuning todos:
* Perform evaluation of the fine-tuned model on the CommonVoice test set.
* Check the exact sizes of the train, eval, and test sets of CommonVoice 11.

## Resuming training from an existing checkpoint
When resuming training from an existing checkpoint:
* The learning rate gets reset if the same parameter value is passed to the training script as in the previous run.<br>
  The learning rate from the last step of the previous run needs to be provided to continue
  training correctly.<br>
  However, even when passing the learning rate from the last step, in the new run it has a different value than expected
  (probably because of warmup).
* It's unclear whether the decision to save the current model
  is made by comparing the current metrics with the metrics of the best checkpoint. I guess a model with worse performance
  will not overwrite the best model checkpoint already existing in the output dir, but this needs to be double-checked.
* We can set the `ignore_data_skip=True` training argument to avoid
  skipping data items already passed to the model - that saves time on data loading (see the sketch after this list).
* It's unclear whether the order of input items in the (shuffled) train set will be the same
  across multiple reruns - i.e. it's unclear whether the sampling is the same across reruns.
* If the sampling is the same across reruns, `ignore_data_skip=True` will lead to the same items being passed to the model
  in the current run. That's OK if the previous run ended at a large step value in its last epoch;
  if not, the same elements from the same epoch will be passed to the model again.

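A minimal sketch of a resume-friendly configuration (parameter values here are illustrative assumptions, not the exact arguments used in this repo):

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative values only: pass the learning rate observed at the last step of
# the previous run, skip a second warmup, and avoid replaying already-seen batches.
training_args = Seq2SeqTrainingArguments(
    output_dir="./",
    learning_rate=8e-6,      # assumption: LR logged at the last step of the previous run
    warmup_steps=0,          # assumption: no second warmup when resuming
    ignore_data_skip=True,   # do not fast-forward through already-consumed data
    max_steps=6000,
)

# trainer.train(resume_from_checkpoint="./checkpoint-1000")  # checkpoint path is illustrative
```
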
## Questions:
* What checkpoint (the best one, I guess) is saved in the `output_dir`?
  How is it overwritten when resuming training from an existing checkpoint?

### Prepended tokens
* Why are the following lines present in the Data Collator?
```python
# if the bos token was appended in the previous tokenization step,
# cut it here, as it gets appended again later anyway
if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
    labels = labels[:, 1:]
```
* `tokenizer.bos_token_id` vs `model.config.decoder_start_token_id`:<br>
  which one should be passed to the Data Collator as the `decoder_start_token_id` parameter?
* Answer:
  * In this case, the two are equivalent. You can verify this:
```python
print(tokenizer.bos_token_id)
print(model.config.decoder_start_token_id)
```

  * Print output:
```
<|startoftranscript|>
<|startoftranscript|>
```

  * Technically speaking, `decoder_start_token_id` is the correct convention here. Before starting to generate any tokens, we initialise the generate method with a starting token, which is the `decoder_start_token_id`.
    See: https://huggingface.co/blog/how-to-generate. The `decoder_start_token_id` corresponds to the initial context word sequence and is the zeroth token generated.

  * We remove this token from the encoded labels in the data collator because we always set the zeroth generated token to the `decoder_start_token_id`. If we left the `decoder_start_token_id` as part of the label sequence, we would predict it as the zeroth token and again as the first token! Because we always force it as the zeroth token, we don't need to predict it as the first token, and so we remove it from the target labels.

  * The other prepended special tokens are not forced in the generation process, and so we don't cut them in the data collator. We need to provide them to the model as target labels so that the model can learn the correct tasks from our data.

  * These tokens correspond to the audio language, the task (translate or transcribe), and whether to predict timestamps.

  * We need to tell the model what language the audio corresponds to and what task it's performing during fine-tuning. This way, it learns which audio corresponds to which language, and the difference between transcribing audio and translating it.

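For context, here is a sketch of the full padding collator that this snippet typically lives in (modelled on the HuggingFace Whisper fine-tuning examples; the version in this repo may differ in details):

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Union

import torch


@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any
    decoder_start_token_id: int

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # pad the audio features and the tokenized labels separately
        input_features = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # replace padding with -100 so it is ignored by the loss
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        # if the bos token was appended in the previous tokenization step,
        # cut it here, as it gets appended again later anyway
        if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels
        return batch
```
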
## Notes:
* Using the CommonVoice 11 dataset in a streaming way (see the streaming sketch after these notes).<br>
  Use `streaming=True` for the train, validation and test sets.<br>
  As an alternative, we could use `streaming=False` for the validation and test sets to save time on data processing,
  but the sizes of the validation and test sets are unknown (need to check).
  They are likely to be large - thus pre-downloading these sets might not reduce
  the overall fine-tuning time compared to streaming mode.
* The train set contains ~370'000 audio files. With `batch_size=64`,
  1 epoch will have ~5782 steps.<br>
  Because of `--eval_steps="1000"`, `--max_steps="6000"` will be used instead of `--max_steps="5800"`
  so that evaluation metrics are computed at the end of training.
* If using Google Colab, you need to execute `sudo chmod -R 777 .git` inside the HF repo
  to set the right permissions to be able to push trained models to the HuggingFace Hub.
* Whisper's `BasicTextNormalizer` splits words containing an apostrophe:
```python
> from transformers.models.whisper.english_normalizer import BasicTextNormalizer
> normalizer = BasicTextNormalizer()
> normalizer("раз'яднаць")
'раз яднаць'
```
* That's why `BelarusianTextNormalizer` (an edited version of `BasicTextNormalizer`) was added to the training script:
```python
> from run_speech_recognition_seq2seq_streaming import BelarusianTextNormalizer
> normalizer_be = BelarusianTextNormalizer()
> normalizer_be("раз'яднаць")
"раз'яднаць"
```
* `use_cache` needs to be set to False since we're using gradient checkpointing, and the two are incompatible
  (see the training-arguments sketch after these notes).
* The default linear scheduler is used.
* The default Adam optimizer is used.
* To save memory (and increase either the model size or the batch size) we can experiment with:
  * using Adafactor instead of Adam.
    Adam keeps two optimizer states per model parameter, but Adafactor uses only one.
    > A word of caution: Adafactor is untested for fine-tuning Whisper,
    > so we are unsure how Adafactor performance compares to Adam!
  * using 8-bit Adam from the `bitsandbytes` module:
    pass the `optim="adamw_bnb_8bit"` param to `Seq2SeqTrainingArguments`
    (see the training-arguments sketch after these notes).
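
A minimal streaming-loading sketch (the dataset id, language config and auth handling are assumptions based on these notes, not necessarily the exact calls in the training script):

```python
from datasets import load_dataset

# Streaming avoids pre-downloading ~370'000 train audio files.
# "mozilla-foundation/common_voice_11_0" and the "be" config are assumptions.
cv_train = load_dataset(
    "mozilla-foundation/common_voice_11_0", "be",
    split="train", streaming=True, use_auth_token=True,
)
cv_test = load_dataset(
    "mozilla-foundation/common_voice_11_0", "be",
    split="test", streaming=True, use_auth_token=True,
)
```

And a training-arguments sketch tying together the memory-related notes (gradient checkpointing with `use_cache=False`, 8-bit Adam); all values are illustrative:

```python
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.config.use_cache = False  # gradient checkpointing and use_cache are incompatible

training_args = Seq2SeqTrainingArguments(
    output_dir="./",
    per_device_train_batch_size=64,
    gradient_checkpointing=True,
    optim="adamw_bnb_8bit",   # 8-bit Adam from bitsandbytes; alternative: optim="adafactor"
    evaluation_strategy="steps",
    eval_steps=1000,
    max_steps=6000,
)
```
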
src/requirements.txt DELETED
@@ -1,9 +0,0 @@
torch>=1.7
torchaudio
git+https://github.com/huggingface/transformers
git+https://github.com/huggingface/datasets
librosa
jiwer
evaluate>=0.3.0
more-itertools
tensorboard
src/run_debug.sh CHANGED
@@ -7,7 +7,7 @@ python src/run_speech_recognition_seq2seq_streaming.py \
   --eval_split_name="validation" \
   --model_index_name="Whisper Tiny Belarusian" \
   \
-  --max_steps="200" \
+  --max_steps="300" \
   --max_eval_samples="64" \
   --output_dir="./" \
   --per_device_train_batch_size="32" \
@@ -34,7 +34,6 @@ python src/run_speech_recognition_seq2seq_streaming.py \
   \
   --do_train \
   --do_eval \
-  --resume_from_checkpoint="." \
   --ignore_data_skip \
   --predict_with_generate \
   --do_normalize_eval \
src/run_speech_recognition_seq2seq_streaming.py CHANGED
@@ -368,28 +368,42 @@ def main():
          logger.info(f'output_dir already exists. will try to load last checkpoint.')
 
          last_checkpoint = get_last_checkpoint(training_args.output_dir)
-         if last_checkpoint is None:
-             logger.info('last_checkpoint is None. will try to read from the model saved in the root of output_dir.')
- 
-             dir_content = os.listdir(training_args.output_dir)
-             if len(dir_content) == 0:
-                 logger.info('output_dir is empty. can not resume training. will start training from scratch.')
+         if last_checkpoint is not None:
+             if training_args.resume_from_checkpoint is None:
+                 logger.info(
+                     f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
+                     "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
+                 )
+             else:
+                 logger.info(f'Last checkpoint found at: {last_checkpoint}. Will ignore it and resume training '
+                             f'from passed resume_from_checkpoint param: {training_args.resume_from_checkpoint}')
+                 assert os.path.isdir(training_args.resume_from_checkpoint)
+         else:
+             logger.info('last_checkpoint is None. will try to read from training_args.resume_from_checkpoint')
+ 
+             if training_args.resume_from_checkpoint is not None and os.path.isdir(training_args.resume_from_checkpoint):
+                 logger.info(f'Will resume training from passed resume_from_checkpoint param: '
+                             f'{training_args.resume_from_checkpoint}')
              else:
-                 model_fn = 'pytorch_model.bin'
-                 if model_fn in dir_content:
-                     logger.info(f'found {model_fn} inside output_dir. '
-                                 f'will continue training treating output_dir as a last checkpoint.')
-                     last_checkpoint = training_args.output_dir
+                 logger.info('last_checkpoint is None. resume_from_checkpoint is either None or not existing dir. '
+                             'will try to read from the model saved in the root of output_dir.')
+ 
+                 dir_content = os.listdir(training_args.output_dir)
+                 if len(dir_content) == 0:
+                     logger.info('output_dir is empty. will start training from scratch.')
                  else:
-                     raise ValueError(
-                         f"Output directory ({training_args.output_dir}) already exists and is not empty. "
-                         "Use --overwrite_output_dir to overcome."
-                     )
-         elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
-             logger.info(
-                 f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
-                 "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
-             )
+                     model_fn = 'pytorch_model.bin'
+                     if model_fn in dir_content:
+                         logger.info(f'found {model_fn} inside output_dir. '
+                                     f'will continue training treating output_dir as a last checkpoint.')
+                         last_checkpoint = training_args.output_dir
+                     else:
+                         raise ValueError(
+                             f'Could not find last_checkpoint, resume_from_checkpoint is either None '
+                             'or not existing dir, output_dir is non-empty but does not contain a model.'
+                             'Use --overwrite_output_dir to overcome.'
+                         )
+ 
 
      # Set seed before initializing model.
      set_seed(training_args.seed)
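For reference, downstream in main() the resolved checkpoint is typically consumed roughly like this (a sketch following the upstream HuggingFace example scripts; the exact wiring in this script may differ):

```python
# sketch: later in main(), after the checkpoint-resolution block above;
# `trainer` is the Seq2SeqTrainer constructed further down in the script
checkpoint = None
if training_args.resume_from_checkpoint is not None:
    checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
    checkpoint = last_checkpoint

train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model()
```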
train.log CHANGED
@@ -97,3 +97,4 @@
  eval_samples_per_second = 3.853
  eval_steps_per_second = 0.12
  eval_wer = 54.5788
+ {'loss': 0.1922, 'learning_rate': 8.033333333333335e-06, 'epoch': 0.03}
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:acefe54daf8bd4fb84bd849519c1fad1c02c35d7d3b1b1b321e84097615195a8
+ oid sha256:ba3e5752f0bcb0a1a12ea56a06813c45fc9e560ac738b4458d08b4141a6bb434
  size 3643