# t5-absa-plusplus-atsc
This model is a fine-tuned version of google-t5/t5-small for aspect-term sentiment classification (ATSC); the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.4055
- F1: 58.38
- Precision: 57.62
- Recall: 59.15
- N TP (true positives): 223
- N Pred (predicted labels): 387
- N Gold (gold labels): 377
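
A minimal inference sketch with the Transformers library is shown below. The repository id is taken from this card; the input format is an assumption, since the prompt template used for ATSC fine-tuning is not documented here.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "tomvoelker/t5-absa-plusplus-atsc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical ATSC-style input: the exact prompt template used during
# fine-tuning is not documented in this card, so this format is illustrative.
text = "The battery life is great, but the screen is dim. aspect: screen"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```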
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 26
- num_epochs: 10
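
As a sketch of how these settings map onto the Transformers training API, assuming a standard `Trainer`-based fine-tuning flow (the actual training script and dataset wiring are not included in this card):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
# The optimizer settings (AdamW, betas=(0.9, 0.999), eps=1e-08) match the
# Transformers defaults, so they need no explicit arguments here.
args = Seq2SeqTrainingArguments(
    output_dir="t5-absa-plusplus-atsc",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # 8 x 2 = effective train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=26,
    num_train_epochs=10,
)
```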
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | N TP | N Pred | N Gold |
|---|---|---|---|---|---|---|---|---|---|
| 0.6296 | 4.3860 | 500 | 0.3825 | 53.96 | 54.62 | 53.32 | 201 | 368 | 377 |
| 0.3482 | 8.7719 | 1000 | 0.4051 | 58.27 | 57.66 | 58.89 | 222 | 385 | 377 |
| 0.3542 | 10.0 | 1140 | 0.4055 | 58.38 | 57.62 | 59.15 | 223 | 387 | 377 |
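
The score columns follow directly from the raw counts: precision = n_tp / n_pred, recall = n_tp / n_gold, and F1 is their harmonic mean. A quick check against the final-epoch row:

```python
# Final-epoch counts from the table above.
n_tp, n_pred, n_gold = 223, 387, 377

precision = n_tp / n_pred  # 0.5762 -> 57.62
recall = n_tp / n_gold     # 0.5915 -> 59.15
f1 = 2 * precision * recall / (precision + recall)  # 0.5838 -> 58.38

print(f"P={precision:.2%}  R={recall:.2%}  F1={f1:.2%}")
```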
### Framework versions

- Transformers 5.6.2
- PyTorch 2.11.0+cu130
- Datasets 4.8.4
- Tokenizers 0.22.2