---
library_name: transformers
license: apache-2.0
base_model: TomasFAV/BERTInvoiceCzechV01
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERTInvoiceCzechV013
  results: []
---
# BERTInvoiceCzechV013

This model is a fine-tuned version of [TomasFAV/BERTInvoiceCzechV01](https://huggingface.co/TomasFAV/BERTInvoiceCzechV01) on an unknown dataset.
It achieves the following results on the evaluation set (a note on how such metrics are conventionally computed follows the list):
- Loss: 0.0656
- Precision: 0.8707
- Recall: 0.8816
- F1: 0.8761
- Accuracy: 0.9835
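
Precision, recall, and F1 on trainer-generated token-classification cards like this one are conventionally entity-level seqeval scores, with accuracy computed per token. The card does not state this explicitly, so treat the following as an illustration of the convention only, with a made-up toy label set:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy BIO-tagged sequences; the model's real label set is not documented.
y_true = [["B-TOTAL", "I-TOTAL", "O", "B-DUE_DATE"]]
y_pred = [["B-TOTAL", "I-TOTAL", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))  # 1.0   (1 of 1 predicted entities correct)
print("recall:   ", recall_score(y_true, y_pred))     # 0.5   (1 of 2 true entities found)
print("f1:       ", f1_score(y_true, y_pred))         # ~0.667
print("accuracy: ", accuracy_score(y_true, y_pred))   # 0.75  (3 of 4 token labels correct)
```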
## Model description

The card was generated automatically and documents little about this checkpoint. What the metadata does say: it is a further fine-tuned version of [TomasFAV/BERTInvoiceCzechV01](https://huggingface.co/TomasFAV/BERTInvoiceCzechV01), and the model name together with the token-level precision/recall/F1/accuracy metrics suggests a BERT-based token-classification model for tagging fields in Czech invoice text. The label set is not documented here.
## Intended uses & limitations

Inferred intended use: extracting structured fields from Czech-language invoices. Known limitations from this card alone: the training data is undocumented ("unknown dataset"), so domain coverage, label definitions, and data licensing are unknown, and with an evaluation F1 of 0.8761 the model will miss or mislabel a nontrivial share of fields, so outputs should be human-reviewed before use in accounting or compliance workflows. A minimal inference sketch is shown below.
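
This is a hedged example, not an official usage snippet from the author: it assumes the checkpoint is published under the repo id `TomasFAV/BERTInvoiceCzechV013` (mirroring the model name) and carries a token-classification head whose label names live in its config.

```python
from transformers import pipeline

# Token-classification pipeline; the label set comes from the checkpoint
# config and is not documented in this card.
ner = pipeline(
    "token-classification",
    model="TomasFAV/BERTInvoiceCzechV013",  # assumed repo id
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

# Czech: "Invoice no. 2024001, due 15 Mar 2024, total CZK 12,500."
text = "Faktura č. 2024001, splatnost 15. 3. 2024, celkem 12 500 Kč."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```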
## Training and evaluation data

Not documented; the trainer only recorded an "unknown dataset". One thing can be read off the results table below: with 20 optimizer steps per epoch at a train batch size of 16, the training set holds at most roughly 320 examples.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1 (a fractional step count, almost certainly meant as a warmup ratio)
- num_epochs: 20
- mixed_precision_training: Native AMP
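
A minimal sketch of how these settings map onto `transformers`, assuming a standard `Trainer` setup; the dataset objects are placeholders, since the actual training script is not published.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# The base checkpoint already carries the (undocumented) invoice label set.
model = AutoModelForTokenClassification.from_pretrained("TomasFAV/BERTInvoiceCzechV01")
tokenizer = AutoTokenizer.from_pretrained("TomasFAV/BERTInvoiceCzechV01")

args = TrainingArguments(
    output_dir="BERTInvoiceCzechV013",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch_fused",  # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # the card records warmup_steps=0.1, which only makes sense as a ratio
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
    eval_strategy="epoch",  # matches the per-epoch validation rows below
)

trainer = Trainer(
    model=model,
    args=args,
    processing_class=tokenizer,
    train_dataset=train_dataset,  # placeholder: tokenized, label-aligned dataset
    eval_dataset=eval_dataset,    # placeholder
)
trainer.train()
```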
### Training results
| | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |
| |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| |
| | No log | 1.0 | 20 | 0.1306 | 0.7273 | 0.7581 | 0.7423 | 0.9645 | |
| | No log | 2.0 | 40 | 0.1137 | 0.7402 | 0.8054 | 0.7714 | 0.9678 | |
| | No log | 3.0 | 60 | 0.0909 | 0.7828 | 0.8369 | 0.8089 | 0.9740 | |
| | No log | 4.0 | 80 | 0.0766 | 0.8162 | 0.8711 | 0.8428 | 0.9785 | |
| | No log | 5.0 | 100 | 0.0761 | 0.8141 | 0.8858 | 0.8484 | 0.9784 | |
| | No log | 6.0 | 120 | 0.0785 | 0.7960 | 0.8850 | 0.8382 | 0.9769 | |
| | No log | 7.0 | 140 | 0.0662 | 0.8520 | 0.8722 | 0.8620 | 0.9817 | |
| | No log | 8.0 | 160 | 0.0722 | 0.8378 | 0.8765 | 0.8567 | 0.9810 | |
| | No log | 9.0 | 180 | 0.0712 | 0.8250 | 0.8827 | 0.8529 | 0.9801 | |
| | No log | 10.0 | 200 | 0.0670 | 0.8544 | 0.8819 | 0.8680 | 0.9823 | |
| | No log | 11.0 | 220 | 0.0663 | 0.8518 | 0.8909 | 0.8709 | 0.9828 | |
| | No log | 12.0 | 240 | 0.0680 | 0.8341 | 0.8843 | 0.8584 | 0.9809 | |
| | No log | 13.0 | 260 | 0.0656 | 0.8704 | 0.8816 | 0.8759 | 0.9835 | |
| | No log | 14.0 | 280 | 0.0655 | 0.8566 | 0.8885 | 0.8723 | 0.9827 | |
| | No log | 15.0 | 300 | 0.0659 | 0.8466 | 0.8831 | 0.8645 | 0.9822 | |
| | No log | 16.0 | 320 | 0.0662 | 0.8483 | 0.8862 | 0.8669 | 0.9821 | |
| | No log | 17.0 | 340 | 0.0689 | 0.8402 | 0.8885 | 0.8637 | 0.9815 | |
| | No log | 18.0 | 360 | 0.0662 | 0.8566 | 0.8905 | 0.8732 | 0.9829 | |
| | No log | 19.0 | 380 | 0.0670 | 0.8519 | 0.8893 | 0.8702 | 0.9824 | |
| | No log | 20.0 | 400 | 0.0663 | 0.8541 | 0.8885 | 0.8710 | 0.9825 | |
### Framework versions

- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2
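
A quick way to check a local environment against these pins; this is an illustrative snippet, not part of the original card.

```python
# Print the installed versions of the libraries listed above.
import datasets
import tokenizers
import torch
import transformers

for mod in (transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__}: {mod.__version__}")
```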