---
library_name: transformers
license: apache-2.0
base_model: TomasFAV/BERTInvoiceCzechV012
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERTInvoiceCzechV0123Test
  results: []
---
|
|
# BERTInvoiceCzechV0123Test
|
|
This model is a fine-tuned version of [TomasFAV/BERTInvoiceCzechV012](https://huggingface.co/TomasFAV/BERTInvoiceCzechV012) on an unknown dataset.
It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.0612
- Precision: 0.8944
- Recall: 0.9177
- F1: 0.9059
- Accuracy: 0.9856
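
The base checkpoint name and the token-level metrics above suggest a token-classification (sequence-labeling) model. A minimal inference sketch, assuming the checkpoint is published under the repo id `TomasFAV/BERTInvoiceCzechV0123Test` and carries a token-classification head (both are assumptions, not confirmed by this card):

```python
from transformers import pipeline

# Assumed repo id (taken from this card's name) and assumed task head
# (token classification, inferred from the precision/recall/F1 metrics above).
ner = pipeline(
    "token-classification",
    model="TomasFAV/BERTInvoiceCzechV0123Test",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

# Hypothetical Czech invoice line; the actual label set is not documented here.
print(ner("Faktura č. 2024001, splatnost 15. 3. 2024, celkem 12 500 Kč"))
```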
|
|
## Model description
|
|
Based on the model name, the base checkpoint, and the token-level evaluation metrics, this appears to be a token-classification (sequence-labeling) model for extracting fields from Czech invoices. The label set and any changes relative to the base model are not documented.
|
|
## Intended uses & limitations
|
|
More information needed
|
|
## Training and evaluation data
|
|
More information needed
|
|
## Training procedure
|
|
### Training hyperparameters
|
|
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
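
A minimal `TrainingArguments` sketch matching the list above. The `output_dir` is hypothetical, `warmup_ratio=0.1` is one reading of the fractional `lr_scheduler_warmup_steps: 0.1` entry, and the model/dataset/Trainer wiring is omitted because it is not documented here:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameter list above, does not reproduce the run.
args = TrainingArguments(
    output_dir="BERTInvoiceCzechV0123Test",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch_fused",   # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,            # assumed meaning of "lr_scheduler_warmup_steps: 0.1"
    num_train_epochs=20,
    fp16=True,                   # "Native AMP" mixed precision
)
```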
|
|
### Training results
|
|
| | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |
| |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| |
| No log | 1.0 | 20 | 0.1056 | 0.7803 | 0.8400 | 0.8091 | 0.9716 |
| | No log | 2.0 | 40 | 0.0831 | 0.8105 | 0.8901 | 0.8484 | 0.9764 | |
| | No log | 3.0 | 60 | 0.0704 | 0.8410 | 0.8994 | 0.8692 | 0.9804 | |
| | No log | 4.0 | 80 | 0.0675 | 0.8403 | 0.9095 | 0.8736 | 0.9808 | |
| | No log | 5.0 | 100 | 0.0632 | 0.8630 | 0.8932 | 0.8779 | 0.9821 | |
| | No log | 6.0 | 120 | 0.0706 | 0.8319 | 0.9111 | 0.8697 | 0.9800 | |
| | No log | 7.0 | 140 | 0.0611 | 0.8729 | 0.8932 | 0.8829 | 0.9834 | |
| | No log | 8.0 | 160 | 0.0608 | 0.8754 | 0.9056 | 0.8902 | 0.9835 | |
| | No log | 9.0 | 180 | 0.0595 | 0.8769 | 0.9243 | 0.9000 | 0.9848 | |
| | No log | 10.0 | 200 | 0.0606 | 0.8759 | 0.9153 | 0.8952 | 0.9842 | |
| | No log | 11.0 | 220 | 0.0610 | 0.8855 | 0.9192 | 0.9021 | 0.9850 | |
| | No log | 12.0 | 240 | 0.0632 | 0.8720 | 0.9258 | 0.8981 | 0.9844 | |
| | No log | 13.0 | 260 | 0.0608 | 0.8961 | 0.9115 | 0.9037 | 0.9853 | |
| | No log | 14.0 | 280 | 0.0610 | 0.8953 | 0.9165 | 0.9058 | 0.9855 | |
| | No log | 15.0 | 300 | 0.0615 | 0.8874 | 0.9181 | 0.9025 | 0.9853 | |
| | No log | 16.0 | 320 | 0.0627 | 0.8841 | 0.9216 | 0.9025 | 0.9851 | |
| No log | 17.0 | 340 | 0.0625 | 0.8807 | 0.9200 | 0.8999 | 0.9847 |
| No log | 18.0 | 360 | 0.0612 | 0.8944 | 0.9177 | 0.9059 | 0.9856 |
| No log | 19.0 | 380 | 0.0619 | 0.8893 | 0.9200 | 0.9044 | 0.9854 |
| | No log | 20.0 | 400 | 0.0618 | 0.8901 | 0.9212 | 0.9053 | 0.9856 | |
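
The card does not say how these metrics were computed; the standard Hugging Face token-classification examples compute entity-level precision/recall/F1 and token-level accuracy with `seqeval` through the `evaluate` library, so a sketch under that assumption (the label names are made up):

```python
import evaluate

# Assumption: metrics follow the HF token-classification example scripts
# (entity-level precision/recall/F1, token-level accuracy via seqeval).
seqeval = evaluate.load("seqeval")

predictions = [["O", "B-AMOUNT", "I-AMOUNT", "O"]]      # hypothetical labels
references = [["O", "B-AMOUNT", "I-AMOUNT", "B-DATE"]]

results = seqeval.compute(predictions=predictions, references=references)
print({k: results[k] for k in
       ("overall_precision", "overall_recall", "overall_f1", "overall_accuracy")})
```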
|
|
|
|
### Framework versions
|
|
- Transformers 5.0.0
- PyTorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2
|
|