End of training
README.md
CHANGED
@@ -15,12 +15,12 @@ should probably proofread and complete it, then remove this comment. -->
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 2.2188
-- eval_model_preparation_time: 0.
+- eval_model_preparation_time: 0.0054
 - eval_cer: 0.4630
 - eval_wer: 0.5242
-- eval_runtime:
-- eval_samples_per_second:
-- eval_steps_per_second: 0.
+- eval_runtime: 40.9239
+- eval_samples_per_second: 13.977
+- eval_steps_per_second: 0.88
 - step: 0

 ## Model description
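For context, the eval_wer and eval_cer figures above are edit-distance metrics: the Levenshtein distance between hypothesis and reference, normalized by reference length, computed over words and characters respectively. A minimal sketch follows; the helpers below are illustrative only, not the trainer's actual metric code (which typically comes from the `evaluate` or `jiwer` packages):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single rolling row)."""
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            # old d[j] becomes the next diagonal; min over delete/insert/substitute
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + cost)
    return d[len(hyp)]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

So a WER of 0.5242 means that, on average, roughly one word-level edit (substitution, insertion, or deletion) is needed for every two reference words.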
all_results.json
CHANGED
@@ -1,10 +1,10 @@
 {
     "eval_cer": 0.4629913780505626,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.
-    "eval_runtime":
+    "eval_model_preparation_time": 0.0054,
+    "eval_runtime": 40.9239,
     "eval_samples": 572,
-    "eval_samples_per_second":
-    "eval_steps_per_second": 0.
+    "eval_samples_per_second": 13.977,
+    "eval_steps_per_second": 0.88,
     "eval_wer": 0.5242065233289455
 }
eval_results.json
CHANGED
@@ -1,10 +1,10 @@
 {
     "eval_cer": 0.4629913780505626,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.
-    "eval_runtime":
+    "eval_model_preparation_time": 0.0054,
+    "eval_runtime": 40.9239,
     "eval_samples": 572,
-    "eval_samples_per_second":
-    "eval_steps_per_second": 0.
+    "eval_samples_per_second": 13.977,
+    "eval_steps_per_second": 0.88,
     "eval_wer": 0.5242065233289455
 }
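The updated throughput fields in eval_results.json are internally consistent: eval_samples_per_second is eval_samples divided by eval_runtime, and the ratio of samples per second to steps per second approximates the evaluation batch size (an inference; no batch size is stated anywhere in this diff). A quick arithmetic check on the literal values:

```python
# Values copied from eval_results.json above.
eval_samples = 572
eval_runtime = 40.9239            # seconds
eval_samples_per_second = 13.977
eval_steps_per_second = 0.88

# Throughput is samples / wall-clock runtime.
assert round(eval_samples / eval_runtime, 3) == eval_samples_per_second

# samples_per_second / steps_per_second ~ 15.88, suggesting (but not
# confirming) a per-device eval batch size of 16.
implied_batch = eval_samples_per_second / eval_steps_per_second
```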
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:35578e1f3781162e825603b7bf23bbf33522e3303d584ca3309493901dee2d56
 size 5496
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3709578.out
CHANGED
@@ -155,3 +155,45 @@ Last Prediction string लता द्वारा अनुवादित ह
 eval_steps_per_second = 0.914
 eval_wer = 0.5242
 
+wandb: - 0.005 MB of 0.039 MB uploaded
+wandb: Run history:
+wandb: eval/cer ▁
+wandb: eval/loss ▁
+wandb: eval/model_preparation_time ▁
+wandb: eval/runtime ▁
+wandb: eval/samples_per_second ▁
+wandb: eval/steps_per_second ▁
+wandb: eval/wer ▁
+wandb: eval_cer ▁
+wandb: eval_loss ▁
+wandb: eval_model_preparation_time ▁
+wandb: eval_runtime ▁
+wandb: eval_samples ▁
+wandb: eval_samples_per_second ▁
+wandb: eval_steps_per_second ▁
+wandb: eval_wer ▁
+wandb: train/global_step ▁▁
+wandb:
+wandb: Run summary:
+wandb: eval/cer 0.46299
+wandb: eval/loss 2.21876
+wandb: eval/model_preparation_time 0.0043
+wandb: eval/runtime 39.3844
+wandb: eval/samples_per_second 14.524
+wandb: eval/steps_per_second 0.914
+wandb: eval/wer 0.52421
+wandb: eval_cer 0.46299
+wandb: eval_loss 2.21876
+wandb: eval_model_preparation_time 0.0043
+wandb: eval_runtime 39.3844
+wandb: eval_samples 572
+wandb: eval_samples_per_second 14.524
+wandb: eval_steps_per_second 0.914
+wandb: eval_wer 0.52421
+wandb: train/global_step 0
+wandb:
+wandb: 🚀 View run transliterated_wer_glamorous_tree_37 at: https://wandb.ai/priyanshipal/huggingface/runs/o0k52lvl
+wandb: ⭐️ View project at: https://wandb.ai/priyanshipal/huggingface
+wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+wandb: Find logs at: ./wandb/run-20241014_235124-o0k52lvl/logs
+wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3710228.out
ADDED
The diff for this file is too large to render. See raw diff.