Priyanship committed on
Commit 05b7496 · verified · 1 Parent(s): 263641c

End of training
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 tags:
 - generated_from_trainer
 model-index:
@@ -9,18 +10,17 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/priyanshipal/huggingface/runs/upry9j53)
 # eval_cache_hindi_only
 
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 2.2188
-- eval_model_preparation_time: 0.0045
+- eval_model_preparation_time: 0.0044
 - eval_cer: 0.4569
 - eval_wer: 0.5264
-- eval_runtime: 31.2077
-- eval_samples_per_second: 18.329
-- eval_steps_per_second: 1.154
+- eval_runtime: 39.8784
+- eval_samples_per_second: 14.344
+- eval_steps_per_second: 0.903
 - step: 0
 
 ## Model description
@@ -54,7 +54,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.43.1
+- Transformers 4.45.2
 - Pytorch 2.4.0
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.20.1
all_results.json CHANGED
@@ -1,10 +1,10 @@
 {
     "eval_cer": 0.45689757252812313,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.0045,
-    "eval_runtime": 31.2077,
+    "eval_model_preparation_time": 0.0044,
+    "eval_runtime": 39.8784,
     "eval_samples": 572,
-    "eval_samples_per_second": 18.329,
-    "eval_steps_per_second": 1.154,
+    "eval_samples_per_second": 14.344,
+    "eval_steps_per_second": 0.903,
     "eval_wer": 0.5264004680415387
 }
config.json CHANGED
@@ -102,7 +102,7 @@
     1
   ],
   "torch_dtype": "float32",
-  "transformers_version": "4.43.1",
+  "transformers_version": "4.45.2",
   "use_weighted_layer_sum": false,
   "vocab_size": 151,
   "xvector_output_dim": 512
eval_results.json CHANGED
@@ -1,10 +1,10 @@
 {
     "eval_cer": 0.45689757252812313,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.0045,
-    "eval_runtime": 31.2077,
+    "eval_model_preparation_time": 0.0044,
+    "eval_runtime": 39.8784,
     "eval_samples": 572,
-    "eval_samples_per_second": 18.329,
-    "eval_steps_per_second": 1.154,
+    "eval_samples_per_second": 14.344,
+    "eval_steps_per_second": 0.903,
     "eval_wer": 0.5264004680415387
 }
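The `eval_wer` and `eval_cer` values in these result files follow the standard definitions: edit distance between reference and hypothesis, over words (WER) or characters (CER), normalized by reference length. A minimal self-contained sketch (not the repo's actual eval script, which is not shown here):

```python
# Minimal WER/CER sketch: Levenshtein edit distance normalized by
# reference length. Illustrative only; the repo's eval script may use
# a library such as jiwer or evaluate instead.
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    # Word error rate: edits over word sequences / reference word count.
    words = reference.split()
    return levenshtein(words, hypothesis.split()) / len(words)

def cer(reference, hypothesis):
    # Character error rate: edits over character sequences.
    return levenshtein(reference, hypothesis) / len(reference)
```

An `eval_cer` of 0.4569 therefore means roughly 46 character edits per 100 reference characters, averaged over the 572 evaluation samples.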
evalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_2144517.out CHANGED
@@ -307,3 +307,45 @@ last prediction string लता द्वारा अनुवादित ह
 
 
 
+wandb: - 0.005 MB of 0.005 MB uploaded
+wandb: Run history:
+wandb: eval/cer ▁
+wandb: eval/loss ▁
+wandb: eval/model_preparation_time ▁
+wandb: eval/runtime ▁
+wandb: eval/samples_per_second ▁
+wandb: eval/steps_per_second ▁
+wandb: eval/wer ▁
+wandb: eval_cer ▁
+wandb: eval_loss ▁
+wandb: eval_model_preparation_time ▁
+wandb: eval_runtime ▁
+wandb: eval_samples ▁
+wandb: eval_samples_per_second ▁
+wandb: eval_steps_per_second ▁
+wandb: eval_wer ▁
+wandb: train/global_step ▁▁
+wandb:
+wandb: Run summary:
+wandb: eval/cer 0.4569
+wandb: eval/loss 2.21876
+wandb: eval/model_preparation_time 0.0045
+wandb: eval/runtime 31.2077
+wandb: eval/samples_per_second 18.329
+wandb: eval/steps_per_second 1.154
+wandb: eval/wer 0.5264
+wandb: eval_cer 0.4569
+wandb: eval_loss 2.21876
+wandb: eval_model_preparation_time 0.0045
+wandb: eval_runtime 31.2077
+wandb: eval_samples 572
+wandb: eval_samples_per_second 18.329
+wandb: eval_steps_per_second 1.154
+wandb: eval_wer 0.5264
+wandb: train/global_step 0
+wandb:
+wandb: 🚀 View run eval_pd20000_w500_s300_shuff100_hinglish at: https://wandb.ai/priyanshipal/huggingface/runs/upry9j53
+wandb: ⭐️ View project at: https://wandb.ai/priyanshipal/huggingface
+wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+wandb: Find logs at: ./wandb/run-20240822_174142-upry9j53/logs
+wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7d8d438794391e3bf26a9530b1ace42edaca09a85e6347831ad44994aa46da18
-size 5432
+oid sha256:d2cbc4a2b03e9738021be3965f5083d23904411fa2d3c32ac712544fd2ddf0d6
+size 5496
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3707642.out ADDED
@@ -0,0 +1,3 @@
+  File "/scratch/elec/puhe/p/palp3/MUCS/eval_script_indicwav2vec.py", line 695
+    '''
+IndentationError: unexpected indent
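This failed run died before any imports ran: `IndentationError` is raised at compile time when a line sits at a deeper indent level than its enclosing block allows, here a stray triple-quoted string at line 695 of the eval script. A hypothetical reduction (the snippet below is illustrative, not the repo's actual code):

```python
# Hypothetical reduction of the failure above: a bare triple-quoted
# string indented deeper than its enclosing block. Python rejects this
# when compiling the module, before a single statement executes.
bad_source = "x = 1\n        '''orphan docstring'''\n"

try:
    compile(bad_source, "<demo>", "exec")
    outcome = "compiled"
except IndentationError as err:
    # err.msg is "unexpected indent", matching the log line above.
    outcome = err.msg
```

Because the error is raised during compilation of the whole file, fixing it only requires dedenting (or removing) the stray string; no runtime state is involved.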
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3707729.out ADDED
@@ -0,0 +1,20 @@
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
+  torch.utils._pytree._register_pytree_node(
+Traceback (most recent call last):
+  File "/scratch/elec/puhe/p/palp3/MUCS/eval_script_indicwav2vec.py", line 40, in <module>
+    import datasets
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/__init__.py", line 17, in <module>
+    from .arrow_dataset import Dataset
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 76, in <module>
+    from .arrow_reader import ArrowReader
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/arrow_reader.py", line 32, in <module>
+    from .download.download_config import DownloadConfig
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/download/__init__.py", line 9, in <module>
+    from .download_manager import DownloadManager, DownloadMode
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/download/download_manager.py", line 33, in <module>
+    from ..utils import tqdm as hf_tqdm
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/utils/__init__.py", line 17, in <module>
+    from .info_utils import VerificationMode
+  File "/scratch/work/palp3/myenv/lib/python3.11/site-packages/datasets/utils/info_utils.py", line 5, in <module>
+    from huggingface_hub.utils import insecure_hashlib
+ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/scratch/work/palp3/myenv/lib/python3.11/site-packages/huggingface_hub/utils/__init__.py)
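This `ImportError` is the typical signature of a version mismatch: the installed `datasets` expects a newer `huggingface_hub` than the one in the environment (`insecure_hashlib` was added to `huggingface_hub.utils` around release 0.17 — an assumption worth verifying against the changelog, not a fact from this repo). One hedged way to fail fast on such mismatches is a numeric version comparison; plain string comparison gets `"0.9.1" < "0.17.0"` wrong:

```python
# Hedged sketch: compare dotted version strings numerically. The 0.17.0
# floor for `insecure_hashlib` is an assumption; check the
# huggingface_hub changelog before relying on it.
def parse_version(v):
    # "0.16.4" -> (0, 16, 4); only the first three numeric components.
    return tuple(int(part) for part in v.split(".")[:3])

def hub_supports_insecure_hashlib(installed, minimum="0.17.0"):
    return parse_version(installed) >= parse_version(minimum)
```

In practice the fix is simply upgrading in the venv (`pip install -U huggingface_hub`), but an early check like this turns a deep import-time traceback into a one-line actionable error.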
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3708006.out ADDED
@@ -0,0 +1,100 @@
+wandb: Currently logged in as: priyanshi-pal (priyanshipal). Use `wandb login --relogin` to force relogin
+wandb: wandb version 0.18.3 is available! To upgrade, please run:
+wandb: $ pip install wandb --upgrade
+wandb: Tracking run with wandb version 0.17.6
+wandb: Run data is saved locally in /scratch/elec/t405-puhe/p/palp3/MUCS/wandb/run-20241014_231835-vogh0pi1
+wandb: Run `wandb offline` to turn off syncing.
+wandb: Syncing run transliterated_wer_glamorous_tree_37
+wandb: ⭐️ View project at https://wandb.ai/priyanshipal/huggingface
+wandb: 🚀 View run at https://wandb.ai/priyanshipal/huggingface/runs/vogh0pi1
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/training_args.py:1545: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
+  warnings.warn(
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py:991: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
+  warnings.warn(
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/auto/feature_extraction_auto.py:331: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
+  warnings.warn(
+Wav2Vec2CTCTokenizer(name_or_path='', vocab_size=149, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '[UNK]', 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
+  147: AddedToken("[UNK]", rstrip=True, lstrip=True, single_word=False, normalized=False, special=False),
+  148: AddedToken("[PAD]", rstrip=True, lstrip=True, single_word=False, normalized=False, special=False),
+  149: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+  150: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+}
+CHECK MODEL PARAMS Wav2Vec2ForCTC(
+  (wav2vec2): Wav2Vec2Model(
+    (feature_extractor): Wav2Vec2FeatureEncoder(
+      (conv_layers): ModuleList(
+        (0): Wav2Vec2LayerNormConvLayer(
+          (conv): Conv1d(1, 512, kernel_size=(10,), stride=(5,))
+          (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+          (activation): GELUActivation()
+        )
+        (1-4): 4 x Wav2Vec2LayerNormConvLayer(
+          (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))
+          (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+          (activation): GELUActivation()
+        )
+        (5-6): 2 x Wav2Vec2LayerNormConvLayer(
+          (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))
+          (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+          (activation): GELUActivation()
+        )
+      )
+    )
+    (feature_projection): Wav2Vec2FeatureProjection(
+      (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+      (projection): Linear(in_features=512, out_features=1024, bias=True)
+      (dropout): Dropout(p=0.3, inplace=False)
+    )
+    (encoder): Wav2Vec2EncoderStableLayerNorm(
+      (pos_conv_embed): Wav2Vec2PositionalConvEmbedding(
+        (conv): ParametrizedConv1d(
+          1024, 1024, kernel_size=(128,), stride=(1,), padding=(64,), groups=16
+          (parametrizations): ModuleDict(
+            (weight): ParametrizationList(
+              (0): _WeightNorm()
+            )
+          )
+        )
+        (padding): Wav2Vec2SamePadLayer()
+        (activation): GELUActivation()
+      )
+      (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+      (dropout): Dropout(p=0.2, inplace=False)
+      (layers): ModuleList(
+        (0-23): 24 x Wav2Vec2EncoderLayerStableLayerNorm(
+          (attention): Wav2Vec2SdpaAttention(
+            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
+            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
+            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
+            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
+          )
+          (dropout): Dropout(p=0.2, inplace=False)
+          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+          (feed_forward): Wav2Vec2FeedForward(
+            (intermediate_dropout): Dropout(p=0.0, inplace=False)
+            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
+            (intermediate_act_fn): GELUActivation()
+            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
+            (output_dropout): Dropout(p=0.2, inplace=False)
+          )
+          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+        )
+      )
+    )
+  )
+  (dropout): Dropout(p=0.0, inplace=False)
+  (lm_head): Linear(in_features=1024, out_features=151, bias=True)
+)
+check the eval set length 572
+Traceback (most recent call last):
+  File "/scratch/elec/puhe/p/palp3/MUCS/eval_script_indicwav2vec.py", line 826, in <module>
+    main()
+  File "/scratch/elec/puhe/p/palp3/MUCS/eval_script_indicwav2vec.py", line 691, in main
+    return metrics
+           ^^^^^^^
+UnboundLocalError: cannot access local variable 'metrics' where it is not associated with a value
+wandb: - 0.011 MB of 0.011 MB uploaded
+wandb: ⭐️ View project at: https://wandb.ai/priyanshipal/huggingface
+wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+wandb: Find logs at: ./wandb/run-20241014_231835-vogh0pi1/logs
+wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
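The `UnboundLocalError` in this run has the classic shape: a local name (`metrics`) that is only assigned on some control-flow path, reached by a bare `return metrics`. A hypothetical reduction (names and branch condition are illustrative, not the repo's actual control flow):

```python
# Hypothetical reduction of the UnboundLocalError above: `metrics` is
# bound only inside a conditional, so `return metrics` fails whenever
# that branch is skipped.
def evaluate(do_eval):
    if do_eval:
        metrics = {"eval_wer": 0.5264}
    return metrics  # UnboundLocalError when do_eval is False

# One fix: bind a default before branching so every path has a value.
def evaluate_fixed(do_eval):
    metrics = {}
    if do_eval:
        metrics = {"eval_wer": 0.5264}
    return metrics
```

The next run in this commit succeeds, which is consistent with the script having been patched along these lines.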
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3708271.out ADDED
@@ -0,0 +1,147 @@
+wandb: Currently logged in as: priyanshi-pal (priyanshipal). Use `wandb login --relogin` to force relogin
+wandb: wandb version 0.18.3 is available! To upgrade, please run:
+wandb: $ pip install wandb --upgrade
+wandb: Tracking run with wandb version 0.17.6
+wandb: Run data is saved locally in /scratch/elec/t405-puhe/p/palp3/MUCS/wandb/run-20241014_232133-214gwh3b
+wandb: Run `wandb offline` to turn off syncing.
+wandb: Syncing run transliterated_wer_glamorous_tree_37
+wandb: ⭐️ View project at https://wandb.ai/priyanshipal/huggingface
+wandb: 🚀 View run at https://wandb.ai/priyanshipal/huggingface/runs/214gwh3b
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/training_args.py:1545: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
+  warnings.warn(
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py:991: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
+  warnings.warn(
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/auto/feature_extraction_auto.py:331: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
+  warnings.warn(
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/accelerate/accelerator.py:488: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+  self.scaler = torch.cuda.amp.GradScaler(**kwargs)
+max_steps is given, it will override any value given in num_train_epochs
+Wav2Vec2CTCTokenizer(name_or_path='', vocab_size=149, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '[UNK]', 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
+  147: AddedToken("[UNK]", rstrip=True, lstrip=True, single_word=False, normalized=False, special=False),
+  148: AddedToken("[PAD]", rstrip=True, lstrip=True, single_word=False, normalized=False, special=False),
+  149: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+  150: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+}
+CHECK MODEL PARAMS Wav2Vec2ForCTC(
+  (wav2vec2): Wav2Vec2Model(
+    (feature_extractor): Wav2Vec2FeatureEncoder(
+      (conv_layers): ModuleList(
+        (0): Wav2Vec2LayerNormConvLayer(
+          (conv): Conv1d(1, 512, kernel_size=(10,), stride=(5,))
+          (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+          (activation): GELUActivation()
+        )
+        (1-4): 4 x Wav2Vec2LayerNormConvLayer(
+          (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))
+          (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+          (activation): GELUActivation()
+        )
+        (5-6): 2 x Wav2Vec2LayerNormConvLayer(
+          (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))
+          (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+          (activation): GELUActivation()
+        )
+      )
+    )
+    (feature_projection): Wav2Vec2FeatureProjection(
+      (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
+      (projection): Linear(in_features=512, out_features=1024, bias=True)
+      (dropout): Dropout(p=0.3, inplace=False)
+    )
+    (encoder): Wav2Vec2EncoderStableLayerNorm(
+      (pos_conv_embed): Wav2Vec2PositionalConvEmbedding(
+        (conv): ParametrizedConv1d(
+          1024, 1024, kernel_size=(128,), stride=(1,), padding=(64,), groups=16
+          (parametrizations): ModuleDict(
+            (weight): ParametrizationList(
+              (0): _WeightNorm()
+            )
+          )
+        )
+        (padding): Wav2Vec2SamePadLayer()
+        (activation): GELUActivation()
+      )
+      (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+      (dropout): Dropout(p=0.2, inplace=False)
+      (layers): ModuleList(
+        (0-23): 24 x Wav2Vec2EncoderLayerStableLayerNorm(
+          (attention): Wav2Vec2SdpaAttention(
+            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
+            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
+            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
+            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
+          )
+          (dropout): Dropout(p=0.2, inplace=False)
+          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+          (feed_forward): Wav2Vec2FeedForward(
+            (intermediate_dropout): Dropout(p=0.0, inplace=False)
+            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
+            (intermediate_act_fn): GELUActivation()
+            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
+            (output_dropout): Dropout(p=0.2, inplace=False)
+          )
+          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+        )
+      )
+    )
+  )
+  (dropout): Dropout(p=0.0, inplace=False)
+  (lm_head): Linear(in_features=1024, out_features=151, bias=True)
+)
+check the eval set length 572
+10/14/2024 23:21:46 - INFO - __main__ - *** Evaluate ***
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py:157: UserWarning: `as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process your labels by using the argument `text` of the regular `__call__` method (either in the same call as your audio inputs, or in a separate call.
+  warnings.warn(
+
+  0%|          | 0/36 [00:00<?, ?it/s]
+  6%|▌         | 2/36 [00:01<00:26, 1.30it/s]
+  8%|▊         | 3/36 [00:02<00:33, 1.00s/it]
+ 11%|█         | 4/36 [00:04<00:41, 1.29s/it]
+ 14%|█▍        | 5/36 [00:06<00:42, 1.38s/it]
+ 17%|█▋        | 6/36 [00:07<00:41, 1.38s/it]
+ 19%|█▉        | 7/36 [00:08<00:35, 1.22s/it]
+ 22%|██▏       | 8/36 [00:08<00:27, 1.03it/s]
+ 25%|██▌       | 9/36 [00:09<00:22, 1.19it/s]
+ 28%|██▊       | 10/36 [00:10<00:20, 1.24it/s]
+ 31%|███       | 11/36 [00:11<00:20, 1.22it/s]
+ 33%|███▎      | 12/36 [00:11<00:19, 1.22it/s]
+ 36%|███▌      | 13/36 [00:12<00:17, 1.28it/s]
+ 39%|███▉      | 14/36 [00:12<00:14, 1.48it/s]
+ 42%|████▏     | 15/36 [00:13<00:12, 1.67it/s]
+ 44%|████▍     | 16/36 [00:13<00:10, 1.83it/s]
+ 47%|████▋     | 17/36 [00:14<00:09, 1.92it/s]
+ 50%|█████     | 18/36 [00:14<00:09, 1.86it/s]
+ 53%|█████▎    | 19/36 [00:15<00:09, 1.77it/s]
+ 56%|█████▌    | 20/36 [00:15<00:08, 1.83it/s]
+ 58%|█████▊    | 21/36 [00:16<00:07, 1.97it/s]
+ 61%|██████    | 22/36 [00:16<00:07, 1.97it/s]
+ 64%|██████▍   | 23/36 [00:17<00:06, 1.93it/s]
+ 67%|██████▋   | 24/36 [00:17<00:06, 1.92it/s]
+ 69%|██████▉   | 25/36 [00:18<00:05, 1.84it/s]
+ 72%|███████▏  | 26/36 [00:19<00:05, 1.86it/s]
+ 75%|███████▌  | 27/36 [00:19<00:04, 1.95it/s]
+ 78%|███████▊  | 28/36 [00:20<00:04, 1.62it/s]
+ 81%|████████  | 29/36 [00:22<00:06, 1.08it/s]
+ 83%|████████▎ | 30/36 [00:23<00:06, 1.07s/it]
+ 86%|████████▌ | 31/36 [00:25<00:06, 1.34s/it]
+ 89%|████████▉ | 32/36 [00:26<00:04, 1.12s/it]
+ 92%|█████████▏| 33/36 [00:26<00:02, 1.04it/s]
+ 94%|█████████▍| 34/36 [00:27<00:01, 1.21it/s]
+ 97%|█████████▋| 35/36 [00:27<00:00, 1.36it/s]
+/scratch/work/palp3/myenv/lib/python3.11/site-packages/huggingface_hub/hf_api.py:3889: UserWarning: It seems that you are about to commit a data file (json/default-b60d5edd0f197c71/0.0.0/7483f22a71512872c377524b97484f6d20c275799bb9e7cd8fb3198178d8220a/json-train.arrow) to a model repository. You are sure this is intended? If you are trying to upload a dataset, please set `repo_type='dataset'` or `--repo-type=dataset` in a CLI.
+  warnings.warn(
+Printing predictions for a few samples:
+Sample 1:
+Reference (English): हम उनका उपयोग ऐसे ही कर सकते हैं या आवश्यकता अनुसार कुछ बदलाव करके उपयोग कर सकते हैं
+######
+Prediction (English): हम उनका उपयोग ऐसे ही कर सकते हैं
+
+Sample 2:
+Reference (English): अतः शीर्षक इस तरह से जोड़ सकते हैं
+######
+Prediction (English): अतः शीर्ष है
+
+Sample 3:
+Reference (English): प्रेसेंटेशन के अंत में आपने स्लाइड की एक कॉपी बना ली है
+######
+Prediction (English): presentation के अंत में आपने स ैंैं
+
+Sample 4:
+Reference (English): चलिए अब फोंट्स और फोंट्स को फॉर्मेट करने के कुछ तरीके देखते हैं
+######
+Prediction (English): चलिए अब fonts और fonts को format करने के कुछ तरीके देेहं
+
+Sample 5:
+Reference (English): यह एक डायलॉग बॉक्स खोलेगा जिसमें हम अपनी आवश्यकतानुसार फॉन्ट स्टाइल और साइज़ सेट कर सकते हैं
+######
+Prediction (English): यह एक dialog box खोलेगा जिसमें हम अपनी आवश्यकत हैहै
+
+Last Reference string यह स्क्रिप्ट लता द्वारा अनुवादित है आईआईटी मुंबई की ओर से मैं रवि कुमार अब आपसे विदा लेता हूँहमसे जुड़ने के लिए धन्यवाद
+
+Last Prediction string लता द्वारा अनुवादित है आई आई टी मुmबई की ओर से मैं रवि कुमार अब आपसे विदा लेता हूँ हमसे जुड़ने के लिए धन्यवाद
+***** eval metrics *****
+  eval_cer                    = 0.4569
+  eval_loss                   = 2.2188
+  eval_model_preparation_time = 0.0044
+  eval_runtime                = 0:00:39.87
+  eval_samples                = 572
+  eval_samples_per_second     = 14.344
+  eval_steps_per_second       = 0.903
+  eval_wer                    = 0.5264