---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-appearance-upsample_replacement-absa-max
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# rlcc-appearance-upsample_replacement-absa-max

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5661
- Accuracy: 0.6220
- F1 Macro: 0.5882
- Precision Macro: 0.5858
- Recall Macro: 0.6126
- Total Tf: [255, 155, 1075, 155]

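Since the base checkpoint and dataset are not recorded in this card, the snippet below is only a minimal inference sketch: the repo id is a placeholder for wherever this model is actually hosted, and the task is assumed to be sequence classification (the name suggests aspect-based sentiment analysis for an "appearance" aspect).

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this model.
classifier = pipeline(
    "text-classification",
    model="your-username/rlcc-appearance-upsample_replacement-absa-max",
)

# Hypothetical input; real preprocessing depends on the (unrecorded) training data.
print(classifier("The jacket looks great, but the stitching feels flimsy."))
```
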
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 65
- num_epochs: 25

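A minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch results table below), not values recorded in this card.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters above.
# output_dir is a placeholder; it is not recorded in this card.
training_args = TrainingArguments(
    output_dir="rlcc-appearance-upsample_replacement-absa-max",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=65,
    num_train_epochs=25,
    eval_strategy="epoch",  # assumed from the epoch-level results table
)
```

Note that although num_epochs was set to 25, the results table below stops at epoch 9, which suggests training was halted early (for example by an early-stopping callback, consistent with validation loss rising after epoch 4); the card does not record why.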
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf              |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:---------------------:|
| 1.092         | 1.0   | 66   | 1.0985          | 0.5171   | 0.4268   | 0.3942          | 0.5033       | [212, 198, 1032, 198] |
| 1.013         | 2.0   | 132  | 1.0659          | 0.6073   | 0.5068   | 0.5150          | 0.5646       | [249, 161, 1069, 161] |
| 0.9201        | 3.0   | 198  | 1.0787          | 0.6341   | 0.5833   | 0.6272          | 0.6442       | [260, 150, 1080, 150] |
| 0.7413        | 4.0   | 264  | 1.1163          | 0.6561   | 0.6226   | 0.6328          | 0.6475       | [269, 141, 1089, 141] |
| 0.6606        | 5.0   | 330  | 1.2175          | 0.6439   | 0.6095   | 0.6147          | 0.6386       | [264, 146, 1084, 146] |
| 0.5027        | 6.0   | 396  | 1.2477          | 0.6268   | 0.5918   | 0.5979          | 0.6114       | [257, 153, 1077, 153] |
| 0.4779        | 7.0   | 462  | 1.2777          | 0.6488   | 0.6188   | 0.6159          | 0.6298       | [266, 144, 1086, 144] |
| 0.3738        | 8.0   | 528  | 1.3978          | 0.6415   | 0.6096   | 0.6103          | 0.6312       | [263, 147, 1083, 147] |
| 0.3518        | 9.0   | 594  | 1.5661          | 0.6220   | 0.5882   | 0.5858          | 0.6126       | [255, 155, 1075, 155] |

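The card does not record how these metric columns were computed. The following is a sketch of a `compute_metrics` function that would reproduce the accuracy and macro-averaged columns above, assuming scikit-learn was used; the meaning of the undocumented `Total Tf` column is left out.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1,
        "precision_macro": precision,
        "recall_macro": recall,
    }
```
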
### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0