End of training
README.md CHANGED

@@ -18,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.4050
-- Bleu: 0.
+- Bleu: 0.0639
-- Precisions: [0.
+- Precisions: [0.79957805907173, 0.7386091127098321, 0.7083333333333334, 0.6765676567656765]
-- Brevity Penalty: 0.
+- Brevity Penalty: 0.0876
-- Length Ratio: 0.
+- Length Ratio: 0.2912
-- Translation Length:
+- Translation Length: 474
 - Reference Length: 1628
-- Cer: 0.
+- Cer: 0.7653
-- Wer: 0.
+- Wer: 0.8371
 
 ## Model description
 
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1.
+- learning_rate: 1.2045081648781836e-05
 - train_batch_size: 1
 - eval_batch_size: 1
 - seed: 42
@@ -53,15 +53,18 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 2
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 5
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:----:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---:|:---:|
-| 0.
-| 0.
+| 0.9353 | 1.0 | 253 | 0.6228 | 0.0486 | [0.7096774193548387, 0.6053921568627451, 0.5612535612535613, 0.5102040816326531] | 0.0820 | 0.2856 | 465 | 1628 | 0.7751 | 0.8592 |
+| 0.462 | 2.0 | 506 | 0.4846 | 0.0568 | [0.7913978494623656, 0.7058823529411765, 0.6609686609686609, 0.6224489795918368] | 0.0820 | 0.2856 | 465 | 1628 | 0.7650 | 0.8423 |
+| 0.4071 | 3.0 | 759 | 0.4226 | 0.0626 | [0.7899159663865546, 0.711217183770883, 0.6767955801104972, 0.6459016393442623] | 0.0889 | 0.2924 | 476 | 1628 | 0.7685 | 0.8436 |
+| 0.3007 | 4.0 | 1012 | 0.4092 | 0.0638 | [0.7957894736842105, 0.7344497607655502, 0.7008310249307479, 0.6644736842105263] | 0.0883 | 0.2918 | 475 | 1628 | 0.7640 | 0.8397 |
+| 0.3114 | 5.0 | 1265 | 0.4050 | 0.0639 | [0.79957805907173, 0.7386091127098321, 0.7083333333333334, 0.6765676567656765] | 0.0876 | 0.2912 | 474 | 1628 | 0.7653 | 0.8371 |
 
 
 ### Framework versions
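The evaluation numbers in the updated card are internally consistent: under the standard BLEU definition, the score is the brevity penalty times the geometric mean of the four n-gram precisions, and the brevity penalty follows from the translation and reference lengths. A minimal sketch checking this against the final-epoch values (standard BLEU formula assumed; not code from this repository):

```python
import math

# Final-epoch metrics as reported in the model card.
precisions = [0.79957805907173, 0.7386091127098321,
              0.7083333333333334, 0.6765676567656765]
translation_length = 474
reference_length = 1628

# Brevity penalty: exp(1 - ref/hyp) when the hypothesis is shorter, else 1.
if translation_length >= reference_length:
    brevity_penalty = 1.0
else:
    brevity_penalty = math.exp(1 - reference_length / translation_length)

# BLEU = brevity penalty * geometric mean of n-gram precisions (n = 1..4).
geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
bleu = brevity_penalty * geo_mean

print(round(brevity_penalty, 4))  # 0.0876
print(round(bleu, 4))             # 0.0639
print(round(translation_length / reference_length, 4))  # 0.2912
```

The severe brevity penalty (hypotheses average ~29% of reference length) explains why BLEU is low even though the individual n-gram precisions are around 0.7 to 0.8.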
metrics.jsonl CHANGED

@@ -2,3 +2,4 @@
 {"eval_loss": 0.4845726191997528, "eval_bleu": 0.05677403569488881, "eval_precisions": [0.7913978494623656, 0.7058823529411765, 0.6609686609686609, 0.6224489795918368], "eval_brevity_penalty": 0.08199678262097645, "eval_length_ratio": 0.2856265356265356, "eval_translation_length": 465, "eval_reference_length": 1628, "eval_cer": 0.7650115721103663, "eval_wer": 0.8422781322989256, "eval_runtime": 74.2244, "eval_samples_per_second": 0.768, "eval_steps_per_second": 0.768, "epoch": 2.0}
 {"eval_loss": 0.4225582778453827, "eval_bleu": 0.06258728631080523, "eval_precisions": [0.7899159663865546, 0.711217183770883, 0.6767955801104972, 0.6459016393442623], "eval_brevity_penalty": 0.08890667390552527, "eval_length_ratio": 0.29238329238329236, "eval_translation_length": 476, "eval_reference_length": 1628, "eval_cer": 0.7685442110284281, "eval_wer": 0.8435798205201185, "eval_runtime": 74.6527, "eval_samples_per_second": 0.764, "eval_steps_per_second": 0.764, "epoch": 3.0}
 {"eval_loss": 0.409177303314209, "eval_bleu": 0.06375586413177027, "eval_precisions": [0.7957894736842105, 0.7344497607655502, 0.7008310249307479, 0.6644736842105263], "eval_brevity_penalty": 0.08826881356184386, "eval_length_ratio": 0.2917690417690418, "eval_translation_length": 475, "eval_reference_length": 1628, "eval_cer": 0.7640123871107425, "eval_wer": 0.839666929775041, "eval_runtime": 74.6634, "eval_samples_per_second": 0.763, "eval_steps_per_second": 0.763, "epoch": 4.0}
+{"eval_loss": 0.40500399470329285, "eval_bleu": 0.06391799322352495, "eval_precisions": [0.79957805907173, 0.7386091127098321, 0.7083333333333334, 0.6765676567656765], "eval_brevity_penalty": 0.08763286710738645, "eval_length_ratio": 0.29115479115479115, "eval_translation_length": 474, "eval_reference_length": 1628, "eval_cer": 0.7653196697608013, "eval_wer": 0.8371441755209847, "eval_runtime": 74.3335, "eval_samples_per_second": 0.767, "eval_steps_per_second": 0.767, "epoch": 5.0}
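metrics.jsonl is newline-delimited JSON, one evaluation record per epoch, so it can be consumed line by line with the standard library alone. A small sketch selecting the best epoch by validation loss, using two records abbreviated from the file above (field names are exactly as they appear there):

```python
import json

# Two records abbreviated from metrics.jsonl (only the fields used here).
lines = [
    '{"epoch": 4.0, "eval_loss": 0.409177303314209, "eval_wer": 0.839666929775041}',
    '{"epoch": 5.0, "eval_loss": 0.40500399470329285, "eval_wer": 0.8371441755209847}',
]

# Parse each line independently, then pick the record with the lowest loss.
records = [json.loads(line) for line in lines]
best = min(records, key=lambda r: r["eval_loss"])
print(best["epoch"])  # 5.0
```

In this run the final epoch is also the best one, so the checkpoint committed here corresponds to the lowest recorded validation loss.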
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5c37243ba325b8ab602dbff931cf5aaf2af83b4d72896f6bcea9a364b0b33735
 size 809103512
runs/May30_15-29-21_ip-172-16-168-165.ec2.internal/events.out.tfevents.1717082961.ip-172-16-168-165.ec2.internal.8916.1 CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dbdb008a882eef252485b8e67fdaa88aff28672c0e416f7f8f4d23d22fc70e62
+oid sha256:dbdb008a882eef252485b8e67fdaa88aff28672c0e416f7f8f4d23d22fc70e62
-size
+size 15363
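model.safetensors and the tfevents file are committed as Git LFS pointer files: the repository stores only the spec version line, the sha256 oid, and the byte size, while the actual blob lives in LFS storage. A sketch of parsing such a pointer and checking a downloaded blob against it (the helper names are illustrative, not part of any library):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a git-lfs pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_blob(data: bytes, fields: dict) -> bool:
    """Check a downloaded blob's sha256 digest and size against the pointer."""
    digest = hashlib.sha256(data).hexdigest()
    return fields["oid"] == f"sha256:{digest}" and int(fields["size"]) == len(data)

# The tfevents pointer from the commit above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:dbdb008a882eef252485b8e67fdaa88aff28672c0e416f7f8f4d23d22fc70e62
size 15363
"""
fields = parse_lfs_pointer(pointer)
print(fields["size"])  # 15363
```

This is also why the diff for these files shows only an oid and size change: from git's point of view, only the three-line pointer was rewritten.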