# finetuned_model_emotion_detection

This model is a fine-tuned version of jhu-clsp/mmBERT-base on the SemEval-2018 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3498
- F1 Macro: 0.5016
## Model description

Fine-tuned version of mmBERT (jhu-clsp/mmBERT-base, a ModernBERT-style multilingual encoder) for multi-label sequence classification.
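A minimal inference sketch for a multi-label classifier like this one. The label set is taken from the metrics table below; the decision threshold of 0.5 and the example logits are illustrative assumptions, and the commented-out lines show how the checkpoint itself would be loaded with `transformers` (requires network access):

```python
import torch

def predict_labels(logits, label_names, threshold=0.5):
    """Apply a sigmoid to multi-label logits and return labels at/above threshold."""
    probs = torch.sigmoid(logits)
    return [name for name, p in zip(label_names, probs.tolist()) if p >= threshold]

# Label set from the per-class metrics table in this card.
LABELS = ["anger", "anticipation", "disgust", "fear", "joy", "love",
          "optimism", "pessimism", "sadness", "surprise", "trust"]

if __name__ == "__main__":
    # To run against the actual checkpoint (assumed repo id, needs network):
    # from transformers import AutoTokenizer, AutoModelForSequenceClassification
    # tok = AutoTokenizer.from_pretrained("Jeanievas/finetuned_model_emotion_detection")
    # model = AutoModelForSequenceClassification.from_pretrained(
    #     "Jeanievas/finetuned_model_emotion_detection")
    # logits = model(**tok("I can't believe we won!", return_tensors="pt")).logits[0]
    # Illustrative logits instead of a real forward pass:
    logits = torch.tensor([-2.0, -1.0, -3.0, -2.5, 2.1, 0.4, 1.2, -1.8, -2.2, 0.9, -1.5])
    print(predict_labels(logits, LABELS))
```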
## Training and evaluation data

Results on the held-out test set:

- test_loss: 0.3415
- test_f1_macro: 0.5195
### Metrics

| Label | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| anger | 0.75 | 0.73 | 0.74 | 919 |
| anticipation | 0.59 | 0.37 | 0.46 | 321 |
| disgust | 0.53 | 0.40 | 0.46 | 423 |
| fear | 0.77 | 0.62 | 0.69 | 298 |
| joy | 0.84 | 0.82 | 0.83 | 873 |
| love | 0.76 | 0.56 | 0.64 | 245 |
| optimism | 0.53 | 0.36 | 0.43 | 278 |
| pessimism | 0.53 | 0.41 | 0.46 | 495 |
| sadness | 0.71 | 0.67 | 0.69 | 644 |
| surprise | 0.44 | 0.21 | 0.29 | 122 |
| trust | 0.49 | 0.20 | 0.29 | 122 |
| micro avg | 0.70 | 0.59 | 0.64 | 4740 |
| macro avg | 0.63 | 0.49 | 0.54 | 4740 |
| weighted avg | 0.68 | 0.59 | 0.63 | 4740 |
| samples avg | 0.66 | 0.61 | 0.60 | 4740 |
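The macro F1 reported above is the unweighted mean of the per-label F1 scores. A small self-contained sketch of that computation on binary indicator matrices (the toy arrays are made up for illustration, not SemEval data):

```python
def f1_per_label(y_true, y_pred):
    """Per-label F1 for multi-label data given as lists of 0/1 rows."""
    n_labels = len(y_true[0])
    scores = []
    for j in range(n_labels):
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum((not t[j]) and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and (not p[j]) for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

def macro_f1(y_true, y_pred):
    """Unweighted mean of the per-label F1 scores."""
    scores = f1_per_label(y_true, y_pred)
    return sum(scores) / len(scores)

# Toy example: 4 samples, 3 labels.
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
print(macro_f1(y_true, y_pred))  # mean of per-label F1 = (1.0 + 2/3 + 2/3) / 3
```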
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 3
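The hyperparameters above roughly correspond to a `transformers.TrainingArguments` configuration like the following sketch. This is a hedged reconstruction, not the actual training script; in particular, a warmup value of 0.1 is normally passed as `warmup_ratio` rather than `warmup_steps`, and the output directory name is assumed:

```python
from transformers import TrainingArguments

# Assumed reconstruction of the configuration listed in this card.
args = TrainingArguments(
    output_dir="finetuned_model_emotion_detection",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # the card lists "lr_scheduler_warmup_steps: 0.1"
    num_train_epochs=3,
)
```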
## Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro |
|---|---|---|---|---|
| No log | 1.0 | 223 | 0.2726 | 0.4179 |
| No log | 2.0 | 446 | 0.2680 | 0.4866 |
| 0.2574 | 3.0 | 669 | 0.3498 | 0.5016 |
## Framework versions
- Transformers 5.3.0
- Pytorch 2.10.0+cu128
- Datasets 4.7.0
- Tokenizers 0.22.2