---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut_experiment_bayesian_trial_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_10
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 0.4219
- Bleu: 0.0632
- Precisions: [0.809322033898305, 0.7493975903614458, 0.7094972067039106, 0.6644518272425249]
- Brevity Penalty: 0.0864
- Length Ratio: 0.2899
- Translation Length: 472
- Reference Length: 1628
- Cer: 0.7596
- Wer: 0.8312
## Model description
More information needed
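No task description is provided, but the base model is Donut, a `VisionEncoderDecoderModel` for document understanding. The following is a minimal inference sketch assuming the standard Donut API; the repo id, input image path, and task prompt are assumptions (the prompt used in fine-tuning is not documented):

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Assumption: the checkpoint is hosted under this repo id.
repo_id = "davelotito/donut_experiment_bayesian_trial_10"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)
model.eval()

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut decoding is driven by a task prompt; the prompt used for this
# fine-tune is not documented, so a bare start token is used here.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
    )
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```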
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of equivalent training arguments follows the list):
- learning_rate: 1.0082458996730595e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
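
A minimal sketch of `Seq2SeqTrainingArguments` mirroring the list above; `output_dir` and any argument not listed are assumptions, and the original run may have used a different argument class or a custom loop:

```python
from transformers import Seq2SeqTrainingArguments

# output_dir and any argument not in the list above are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="donut_experiment_bayesian_trial_10",
    learning_rate=1.0082458996730595e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size: 2
    num_train_epochs=4,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```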
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.3888 | 1.0 | 253 | 0.5110 | 0.0673 | [0.7909836065573771, 0.7099767981438515, 0.6657754010695187, 0.6277602523659306] | 0.0967 | 0.2998 | 488 | 1628 | 0.7690 | 0.8412 |
| 0.326 | 2.0 | 506 | 0.4539 | 0.0654 | [0.7908902691511387, 0.7276995305164319, 0.6775067750677507, 0.6153846153846154] | 0.0934 | 0.2967 | 483 | 1628 | 0.7604 | 0.8362 |
| 0.3191 | 3.0 | 759 | 0.4256 | 0.0654 | [0.7837837837837838, 0.7287735849056604, 0.6893732970027248, 0.6451612903225806] | 0.0921 | 0.2955 | 481 | 1628 | 0.7599 | 0.8331 |
| 0.2632 | 4.0 | 1012 | 0.4219 | 0.0632 | [0.809322033898305, 0.7493975903614458, 0.7094972067039106, 0.6644518272425249] | 0.0864 | 0.2899 | 472 | 1628 | 0.7596 | 0.8312 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1