This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6381
- Accuracy: 0.9725
- F1 Macro: 0.9510
- Precision Macro: 0.9484
- Recall Macro: 0.9537
- Total Tf: [1558, 44, 1558, 44]
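The macro-averaged metrics above are unweighted means of the per-class scores. As a minimal sketch (the toy labels below are illustrative, not this model's predictions), macro precision/recall/F1 can be computed as:

```python
def macro_scores(y_true, y_pred):
    """Unweighted per-class average of precision, recall, and F1."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy example with two classes
p, r, f = macro_scores([0, 0, 1, 1], [0, 1, 1, 1])
```

Note that macro F1 averages the per-class F1 scores; it is not, in general, the harmonic mean of macro precision and macro recall.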
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 212
- num_epochs: 15
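The linear scheduler with warmup ramps the learning rate from 0 to 2e-05 over the first 212 steps, then decays it linearly to 0. A minimal sketch of that shape (assuming 3195 total steps, i.e. 15 epochs x 213 steps per epoch, as in the results table):

```python
def linear_schedule_lr(step, base_lr=2e-5, warmup_steps=212, total_steps=3195):
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < warmup_steps:
        # Warmup: ramp linearly from 0 up to base_lr
        return base_lr * step / warmup_steps
    # Decay: ramp linearly from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

In practice this schedule corresponds to `transformers.get_linear_schedule_with_warmup`; the sketch above is only for illustrating the shape.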
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
|---|---|---|---|---|---|---|---|---|
| 0.2724 | 1.0 | 213 | 0.2590 | 0.9588 | 0.9294 | 0.9118 | 0.9499 | [1536, 66, 1536, 66] |
| 0.2408 | 2.0 | 426 | 0.1592 | 0.9719 | 0.9518 | 0.9338 | 0.9727 | [1557, 45, 1557, 45] |
| 0.0874 | 3.0 | 639 | 0.2457 | 0.9682 | 0.9446 | 0.9320 | 0.9585 | [1551, 51, 1551, 51] |
| 0.0554 | 4.0 | 852 | 0.2417 | 0.9700 | 0.9484 | 0.9320 | 0.9671 | [1554, 48, 1554, 48] |
| 0.0369 | 5.0 | 1065 | 0.4010 | 0.9738 | 0.9531 | 0.9518 | 0.9544 | [1560, 42, 1560, 42] |
| 0.0421 | 6.0 | 1278 | 0.4548 | 0.9757 | 0.9564 | 0.9557 | 0.9571 | [1563, 39, 1563, 39] |
| 0.0449 | 7.0 | 1491 | 0.5434 | 0.9763 | 0.9570 | 0.9613 | 0.9530 | [1564, 38, 1564, 38] |
| 0.0336 | 8.0 | 1704 | 0.6023 | 0.9732 | 0.9515 | 0.9549 | 0.9481 | [1559, 43, 1559, 43] |
| 0.0175 | 9.0 | 1917 | 0.5159 | 0.9725 | 0.9511 | 0.9472 | 0.9552 | [1558, 44, 1558, 44] |
| 0.0061 | 10.0 | 2130 | 0.6394 | 0.9732 | 0.9516 | 0.9537 | 0.9496 | [1559, 43, 1559, 43] |
| 0.0016 | 11.0 | 2343 | 0.4762 | 0.9732 | 0.9523 | 0.9478 | 0.9570 | [1559, 43, 1559, 43] |
| 0.0246 | 12.0 | 2556 | 0.5617 | 0.9744 | 0.9544 | 0.9511 | 0.9578 | [1561, 41, 1561, 41] |
| 0.0101 | 13.0 | 2769 | 0.6450 | 0.9738 | 0.9531 | 0.9518 | 0.9544 | [1560, 42, 1560, 42] |
| 0.0004 | 14.0 | 2982 | 0.6274 | 0.9725 | 0.9510 | 0.9484 | 0.9537 | [1558, 44, 1558, 44] |
| 0.0049 | 15.0 | 3195 | 0.6381 | 0.9725 | 0.9510 | 0.9484 | 0.9537 | [1558, 44, 1558, 44] |
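Validation loss bottoms out at epoch 2 (0.1592) and climbs thereafter while accuracy stays roughly flat, a typical overfitting pattern; the headline metrics above come from the final epoch-15 checkpoint. A small sketch of selecting the best checkpoint by validation loss from the table:

```python
# Epoch -> validation loss, transcribed from the training results table.
val_loss = {
    1: 0.2590, 2: 0.1592, 3: 0.2457, 4: 0.2417, 5: 0.4010,
    6: 0.4548, 7: 0.5434, 8: 0.6023, 9: 0.5159, 10: 0.6394,
    11: 0.4762, 12: 0.5617, 13: 0.6450, 14: 0.6274, 15: 0.6381,
}
# Pick the epoch with the lowest validation loss.
best_epoch = min(val_loss, key=val_loss.get)  # epoch 2
```

With the Trainer, the equivalent behavior comes from `load_best_model_at_end=True` with `metric_for_best_model="eval_loss"`.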
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
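To reproduce this environment, the versions above can be pinned with pip (note the PyTorch package is published as `torch`):

```shell
pip install "transformers==4.44.0" "torch==2.4.0" "datasets==2.21.0" "tokenizers==0.19.1"
```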