---
library_name: transformers
license: mit
base_model: TomasFAV/LiLTInvoiceCzechV0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LiLTInvoiceCzechV03
  results: []
---
|
|
# LiLTInvoiceCzechV03
|
|
This model is a fine-tuned version of [TomasFAV/LiLTInvoiceCzechV0](https://huggingface.co/TomasFAV/LiLTInvoiceCzechV0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0473
- Precision: 0.8752
- Recall: 0.8976
- F1: 0.8863
- Accuracy: 0.9899
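
LiLT consumes OCR output rather than raw text: every token needs a bounding box on a 0-1000 normalized grid alongside its input ids. A minimal inference sketch, assuming the repo id `TomasFAV/LiLTInvoiceCzechV03` (taken from this card's name), a fast tokenizer in the checkpoint, and made-up OCR words and boxes:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "TomasFAV/LiLTInvoiceCzechV03"  # assumed from the card name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# One page of OCR output: words plus boxes normalized to 0-1000 (dummy values).
words = ["Faktura", "2024001", "Celkem:", "1250,00", "Kč"]
boxes = [[80, 40, 190, 62], [200, 40, 310, 62],
         [80, 700, 170, 722], [180, 700, 270, 722], [280, 700, 310, 722]]

enc = tokenizer(words, is_split_into_words=True, truncation=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a zero box.
token_boxes = [boxes[i] if i is not None else [0, 0, 0, 0] for i in enc.word_ids()]
enc["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**enc).logits

for token, pred in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                       logits.argmax(-1).squeeze(0).tolist()):
    print(token, model.config.id2label[pred])
```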
|
|
## Model description
|
|
Judging by its name and base model, this appears to be a [LiLT](https://huggingface.co/docs/transformers/model_doc/lilt) (Language-Independent Layout Transformer) token-classification model for extracting key fields from Czech invoices, continuing fine-tuning from [TomasFAV/LiLTInvoiceCzechV0](https://huggingface.co/TomasFAV/LiLTInvoiceCzechV0). The label set is not documented in this card; `model.config.id2label` lists the field types the checkpoint predicts.
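
Since the labels are not listed here, one quick way to inspect them (repo id again assumed from the card name):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("TomasFAV/LiLTInvoiceCzechV03")  # assumed repo id
print(config.model_type)  # expected: "lilt"
print(config.id2label)    # the invoice field labels this checkpoint predicts
```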
|
|
## Intended uses & limitations
|
|
Presumably intended for key-field extraction from OCR'd Czech invoices: the model classifies each word, given its text and bounding box, into an invoice field type. Because the training data is undocumented, behaviour on invoice layouts, vendors, or languages not represented in that data is unknown, and the metrics above should be read as indicative only.
|
|
## Training and evaluation data
|
|
Not documented. From the training log below (12 optimizer steps per epoch at batch size 16), the training set appears to contain roughly 190 examples; nothing about the evaluation split is recorded beyond the metrics above.
|
|
## Training procedure
|
|
### Training hyperparameters
|
|
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
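
For reference, the list above maps onto `TrainingArguments` roughly as sketched below. This is not the original training script; in particular, the reported warmup value of 0.1 only makes sense as a fraction of total steps, so it is expressed as `warmup_ratio`, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lilt-invoice-czech-v03",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # card reports "lr_scheduler_warmup_steps: 0.1"
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
)
```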
|
|
### Training results
|
|
| | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |
| |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| |
| | No log | 1.0 | 12 | 0.0974 | 0.7395 | 0.7218 | 0.7306 | 0.9775 | |
| | No log | 2.0 | 24 | 0.0776 | 0.6830 | 0.7867 | 0.7312 | 0.9764 | |
| | No log | 3.0 | 36 | 0.0662 | 0.7488 | 0.7884 | 0.7681 | 0.9809 | |
| | No log | 4.0 | 48 | 0.0576 | 0.7648 | 0.8823 | 0.8193 | 0.9836 | |
| No log | 5.0 | 60 | 0.0498 | 0.8250 | 0.8447 | 0.8347 | 0.9862 |
| | No log | 6.0 | 72 | 0.0495 | 0.8102 | 0.8379 | 0.8238 | 0.9861 | |
| | No log | 7.0 | 84 | 0.0500 | 0.8078 | 0.9181 | 0.8594 | 0.9871 | |
| | No log | 8.0 | 96 | 0.0454 | 0.8629 | 0.8805 | 0.8716 | 0.9890 | |
| | No log | 9.0 | 108 | 0.0444 | 0.8479 | 0.8942 | 0.8704 | 0.9892 | |
| | No log | 10.0 | 120 | 0.0467 | 0.8344 | 0.9113 | 0.8711 | 0.9887 | |
| | No log | 11.0 | 132 | 0.0457 | 0.8509 | 0.8959 | 0.8728 | 0.9892 | |
| | No log | 12.0 | 144 | 0.0450 | 0.8553 | 0.8976 | 0.8759 | 0.9893 | |
| | No log | 13.0 | 156 | 0.0463 | 0.8719 | 0.8942 | 0.8829 | 0.9897 | |
| | No log | 14.0 | 168 | 0.0474 | 0.8555 | 0.8993 | 0.8769 | 0.9894 | |
| | No log | 15.0 | 180 | 0.0468 | 0.8765 | 0.8959 | 0.8861 | 0.9897 | |
| | No log | 16.0 | 192 | 0.0473 | 0.8752 | 0.8976 | 0.8863 | 0.9899 | |
| | No log | 17.0 | 204 | 0.0467 | 0.8731 | 0.8925 | 0.8827 | 0.9896 | |
| | No log | 18.0 | 216 | 0.0473 | 0.8709 | 0.8976 | 0.8840 | 0.9897 | |
| | No log | 19.0 | 228 | 0.0474 | 0.8746 | 0.8925 | 0.8834 | 0.9897 | |
| | No log | 20.0 | 240 | 0.0473 | 0.8763 | 0.8942 | 0.8851 | 0.9897 | |
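
The evaluation metrics quoted at the top of this card match the epoch-16 row, which has the best F1 and accuracy, suggesting the best checkpoint rather than the final one was kept. The `No log` entries in the training-loss column most likely just mean that an epoch (12 optimizer steps) is shorter than the `Trainer`'s default logging interval of 500 steps, so no intermediate training loss was recorded.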
|
|
|
|
### Framework versions
|
|
- Transformers 5.0.0
- PyTorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2
|
|