End of training
README.md
CHANGED
@@ -15,12 +15,12 @@ should probably proofread and complete it, then remove this comment. -->
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 2.2188
-- eval_model_preparation_time: 0.
+- eval_model_preparation_time: 0.0052
 - eval_cer: 0.4630
 - eval_wer: 0.5242
-- eval_runtime:
-- eval_samples_per_second:
-- eval_steps_per_second: 0.
+- eval_runtime: 46.1107
+- eval_samples_per_second: 12.405
+- eval_steps_per_second: 0.781
 - step: 0

 ## Model description
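The README reports both a word error rate (eval_wer) and a character error rate (eval_cer). Both are Levenshtein edit distance divided by reference length, computed over words and characters respectively. The training script most likely used a library such as `jiwer` or `evaluate` for this; the following is only a minimal self-contained sketch of the underlying formula, not the repository's actual metric code.

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance via a single rolling DP row.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1]

def wer(ref, hyp):
    # Word error rate: edit distance over word tokens / reference word count.
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref, hyp):
    # Character error rate: edit distance over characters / reference length.
    return edit_distance(ref, hyp) / len(ref)
```

A CER (0.4630) lower than the WER (0.5242), as reported here, is typical for ASR: many word-level errors are single-character substitutions.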
all_results.json
CHANGED
@@ -1,10 +1,10 @@
 {
     "eval_cer": 0.4629913780505626,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.
-    "eval_runtime":
+    "eval_model_preparation_time": 0.0052,
+    "eval_runtime": 46.1107,
     "eval_samples": 572,
-    "eval_samples_per_second":
-    "eval_steps_per_second": 0.
+    "eval_samples_per_second": 12.405,
+    "eval_steps_per_second": 0.781,
     "eval_wer": 0.5242065233289455
 }
eval_results.json
CHANGED
@@ -1,10 +1,10 @@
 {
     "eval_cer": 0.4629913780505626,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.
-    "eval_runtime":
+    "eval_model_preparation_time": 0.0052,
+    "eval_runtime": 46.1107,
     "eval_samples": 572,
-    "eval_samples_per_second":
-    "eval_steps_per_second": 0.
+    "eval_samples_per_second": 12.405,
+    "eval_steps_per_second": 0.781,
     "eval_wer": 0.5242065233289455
 }
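The throughput numbers in these JSON files are internally consistent: eval_samples_per_second is just eval_samples divided by eval_runtime, and eval_steps_per_second times eval_runtime gives the number of evaluation batches. A quick arithmetic check (the implied batch count of 36 would correspond to a per-device eval batch size of 16 for 572 samples, but that batch size is an inference, not stated in the source):

```python
eval_samples = 572        # from eval_results.json
eval_runtime = 46.1107    # seconds, from eval_results.json

# Samples per second should reproduce the reported 12.405.
samples_per_second = eval_samples / eval_runtime

# Steps per second (0.781) times runtime gives the evaluation batch count.
eval_batches = 0.781 * eval_runtime

print(round(samples_per_second, 3), round(eval_batches))
```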
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fa17cb2655e1bc3e8ad86df389604a3b1d4c89250cbf9ff3e38de4c3ecf760a6
 size 5496
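Note that this diff edits a Git LFS pointer, not the binary itself: training_args.bin is stored via LFS, so the repository only tracks a small text stub with a version line, the sha256 oid of the blob, and its size in bytes. A minimal sketch of parsing that pointer format (the helper name is illustrative, not a real git-lfs API):

```python
def parse_lfs_pointer(text):
    # A Git LFS pointer is a short "key value" text file; the actual
    # binary content is fetched separately by its sha256 oid.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "oid_algo": algo,
            "oid": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:fa17cb2655e1bc3e8ad86df389604a3b1d4c89250cbf9ff3e38de4c3ecf760a6
size 5496
"""
info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # sha256 5496
```

The 5496-byte size is consistent with a pickled `TrainingArguments` object rather than model weights.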
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3710228.out
CHANGED
@@ -1299,3 +1299,45 @@ Last Prediction string लता द्वारा अनुवादित ह
 eval_steps_per_second = 0.88
 eval_wer = 0.5242
 
+wandb: - 0.005 MB of 0.005 MB uploaded
+wandb: Run history:
+wandb:                    eval/cer ▁
+wandb:                   eval/loss ▁
+wandb: eval/model_preparation_time ▁
+wandb:                eval/runtime ▁
+wandb:     eval/samples_per_second ▁
+wandb:       eval/steps_per_second ▁
+wandb:                    eval/wer ▁
+wandb:                    eval_cer ▁
+wandb:                   eval_loss ▁
+wandb: eval_model_preparation_time ▁
+wandb:                eval_runtime ▁
+wandb:                eval_samples ▁
+wandb:     eval_samples_per_second ▁
+wandb:       eval_steps_per_second ▁
+wandb:                    eval_wer ▁
+wandb:           train/global_step ▁▁
+wandb: 
+wandb: Run summary:
+wandb:                    eval/cer 0.46299
+wandb:                   eval/loss 2.21876
+wandb: eval/model_preparation_time 0.0054
+wandb:                eval/runtime 40.9239
+wandb:     eval/samples_per_second 13.977
+wandb:       eval/steps_per_second 0.88
+wandb:                    eval/wer 0.52421
+wandb:                    eval_cer 0.46299
+wandb:                   eval_loss 2.21876
+wandb: eval_model_preparation_time 0.0054
+wandb:                eval_runtime 40.9239
+wandb:                eval_samples 572
+wandb:     eval_samples_per_second 13.977
+wandb:       eval_steps_per_second 0.88
+wandb:                    eval_wer 0.52421
+wandb:           train/global_step 0
+wandb: 
+wandb: 🚀 View run transliterated_wer_glamorous_tree_37 at: https://wandb.ai/priyanshipal/huggingface/runs/e1ql8pbs
+wandb: ⭐️ View project at: https://wandb.ai/priyanshipal/huggingface
+wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+wandb: Find logs at: ./wandb/run-20241015_001123-e1ql8pbs/logs
+wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3711144.out
ADDED
The diff for this file is too large to render. See the raw diff.