---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base-finetuned-cord-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: DonutInvoiceCzechV3
  results: []
---

# DonutInvoiceCzechV3

This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2946
- Accuracy: 0.9152
- F1: 0.8838

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.9346        | 1.0   | 46   | 2.6485          | 0.0116   | 0.0064 |
| 1.3843        | 2.0   | 92   | 1.2891          | 0.2501   | 0.2407 |
| 0.8102        | 3.0   | 138  | 0.7001          | 0.5997   | 0.5336 |
| 0.2805        | 4.0   | 184  | 0.4553          | 0.6544   | 0.6571 |
| 0.1336        | 5.0   | 230  | 0.3413          | 0.8010   | 0.7771 |
| 0.1062        | 6.0   | 276  | 0.2924          | 0.7955   | 0.7831 |
| 0.0869        | 7.0   | 322  | 0.2980          | 0.8219   | 0.7988 |
| 0.0957        | 8.0   | 368  | 0.3558          | 0.8100   | 0.7938 |
| 0.0704        | 9.0   | 414  | 0.3160          | 0.8147   | 0.8055 |
| 0.0674        | 10.0  | 460  | 0.3314          | 0.8531   | 0.8247 |
| 0.0464        | 11.0  | 506  | 0.3728          | 0.8521   | 0.8146 |
| 0.0358        | 12.0  | 552  | 0.3211          | 0.8372   | 0.8079 |
| 0.0222        | 13.0  | 598  | 0.3009          | 0.8836   | 0.8420 |
| 0.0299        | 14.0  | 644  | 0.2888          | 0.8698   | 0.8362 |
| 0.0133        | 15.0  | 690  | 0.3496          | 0.8558   | 0.8459 |
| 0.0201        | 16.0  | 736  | 0.2847          | 0.8961   | 0.8665 |
| 0.0142        | 17.0  | 782  | 0.3228          | 0.9005   | 0.8652 |
| 0.0163        | 18.0  | 828  | 0.3359          | 0.8669   | 0.8310 |
| 0.0096        | 19.0  | 874  | 0.3167          | 0.8759   | 0.8488 |
| 0.0175        | 20.0  | 920  | 0.2905          | 0.8938   | 0.8687 |
| 0.0129        | 21.0  | 966  | 0.3119          | 0.8797   | 0.8570 |
| 0.0081        | 22.0  | 1012 | 0.3157          | 0.8780   | 0.8729 |
| 0.0036        | 23.0  | 1058 | 0.2950          | 0.9029   | 0.8731 |
| 0.0049        | 24.0  | 1104 | 0.3194          | 0.9048   | 0.8632 |
| 0.0034        | 25.0  | 1150 | 0.3091          | 0.8987   | 0.8650 |
| 0.0012        | 26.0  | 1196 | 0.2910          | 0.8968   | 0.8718 |
| 0.0049        | 27.0  | 1242 | 0.2924          | 0.9115   | 0.8769 |
| 0.0025        | 28.0  | 1288 | 0.2939          | 0.9040   | 0.8679 |
| 0.0014        | 29.0  | 1334 | 0.2946          | 0.9152   | 0.8838 |
| 0.0014        | 30.0  | 1380 | 0.3091          | 0.8989   | 0.8676 |
| 0.0004        | 31.0  | 1426 | 0.2930          | 0.8991   | 0.8637 |
| 0.0005        | 32.0  | 1472 | 0.2962          | 0.8977   | 0.8747 |
| 0.0008        | 33.0  | 1518 | 0.2922          | 0.8974   | 0.8665 |
| 0.0004        | 34.0  | 1564 | 0.2875          | 0.8982   | 0.8696 |
| 0.0004        | 35.0  | 1610 | 0.2895          | 0.8962   | 0.8665 |
| 0.0042        | 36.0  | 1656 | 0.2877          | 0.8944   | 0.8665 |
| 0.0004        | 37.0  | 1702 | 0.2879          | 0.8965   | 0.8701 |
| 0.0003        | 38.0  | 1748 | 0.2875          | 0.8984   | 0.8718 |
| 0.0003        | 39.0  | 1794 | 0.2879          | 0.8984   | 0.8718 |
| 0.0004        | 40.0  | 1840 | 0.2874          | 0.8984   | 0.8718 |

### Framework versions

- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2
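
### Example training configuration

The hyperparameters listed above correspond roughly to the `Seq2SeqTrainingArguments` sketch below. This is a reconstruction for reference only: the `output_dir`, evaluation strategy, and any saving/logging settings are assumptions and are not documented in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters reported in this card.
# output_dir and eval_strategy are assumptions, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="DonutInvoiceCzechV3",   # assumed
    learning_rate=9e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,                          # Native AMP mixed precision
    eval_strategy="epoch",              # assumed; matches the per-epoch validation rows above
)
```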
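
## How to use

A minimal inference sketch, assuming the standard Donut API in Transformers (`DonutProcessor` + `VisionEncoderDecoderModel`). The repository id, the input image path, and the task prompt are placeholders: the card does not state where the fine-tuned weights are hosted or which task prompt this fine-tune expects, so the prompt below simply follows the CORD-v2 convention of the base model.

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Placeholder repo id; replace with the actual location of the fine-tuned weights.
model_id = "your-username/DonutInvoiceCzechV3"

processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# Load an invoice image and prepare pixel values.
image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

# Task prompt assumed to follow the base model's CORD-v2 convention.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

# Decode, strip special tokens and the leading task prompt, then convert to JSON.
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```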