# MARBERT-fold5

This model is a fine-tuned version of [UBC-NLP/MARBERT](https://huggingface.co/UBC-NLP/MARBERT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1994
- Accuracy: 0.7917
- Macro F1: 0.7921
- Weighted F1: 0.7917
- F1 Pro: 0.7850
- F1 Against: 0.7874
- F1 Neutral: 0.8039
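As a sanity check on the reported numbers: macro F1 is the unweighted mean of the three per-class F1 scores, while weighted F1 averages them by class support, so the small gap between the two (0.7921 vs. 0.7917) suggests the three stance classes are nearly balanced. A minimal check of the macro figure:

```python
# Per-class F1 scores reported above (Pro, Against, Neutral).
per_class_f1 = [0.7850, 0.7874, 0.8039]

# Macro F1 is the unweighted mean of the per-class scores.
macro_f1 = sum(per_class_f1) / len(per_class_f1)

print(round(macro_f1, 4))  # 0.7921, matching the reported Macro F1
```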
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused, `adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
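The hyperparameters above correspond to a `TrainingArguments` configuration along these lines (a sketch, not the exact training script; the output directory name is a placeholder, and `fp16=True` is assumed as the switch for Native AMP):

```python
from transformers import TrainingArguments

# Sketch of a configuration matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="marbert-fold5",     # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                      # Native AMP mixed precision
)
```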
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 | Weighted F1 | F1 Pro | F1 Against | F1 Neutral |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:------:|:----------:|:----------:|
| 0.9663 | 1.1628 | 50 | 0.7450 | 0.7143 | 0.7149 | 0.7149 | 0.7216 | 0.7101 | 0.7129 |
| 0.5286 | 2.3256 | 100 | 0.6924 | 0.7381 | 0.7381 | 0.7379 | 0.7586 | 0.7179 | 0.7379 |
| 0.2872 | 3.4884 | 150 | 0.7109 | 0.7619 | 0.7620 | 0.7628 | 0.7677 | 0.7769 | 0.7414 |
| 0.1544 | 4.6512 | 200 | 0.9794 | 0.7679 | 0.7676 | 0.7677 | 0.7611 | 0.7742 | 0.7677 |
| 0.1275 | 5.8140 | 250 | 1.1354 | 0.7619 | 0.7626 | 0.7622 | 0.7748 | 0.7438 | 0.7692 |
| 0.0587 | 6.9767 | 300 | 1.1990 | 0.7917 | 0.7921 | 0.7917 | 0.7850 | 0.7874 | 0.8039 |
| 0.0195 | 8.1395 | 350 | 1.3248 | 0.7738 | 0.7744 | 0.7738 | 0.7719 | 0.7627 | 0.7885 |
| 0.0241 | 9.3023 | 400 | 1.3347 | 0.7798 | 0.7803 | 0.7799 | 0.7719 | 0.7769 | 0.7921 |
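One detail recoverable from the log above: evaluation rows are 50 steps apart, and step 50 lands at epoch 1.1628, so one epoch is about 43 optimizer steps; at a train batch size of 16 that implies a training split of roughly 688 examples for this fold. A quick arithmetic check:

```python
# Step 50 corresponds to epoch 1.1628, giving the steps-per-epoch rate.
steps_per_epoch = round(50 / 1.1628)
print(steps_per_epoch)          # 43

# With train_batch_size = 16, the training split is roughly this size
# (the last batch of an epoch may be partial, so this is approximate).
approx_train_examples = steps_per_epoch * 16
print(approx_train_examples)    # 688

# The last row (step 400, epoch 9.3023) is consistent with the same rate.
print(round(400 / 9.3023))      # 43
```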
### Framework versions
- Transformers 5.0.0
- PyTorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2