Relacosm committed (verified)
Commit ae71f03 · Parent(s): a0f2521

Anti-overfitting 5-class sentiment model

Files changed (2):
  1. README.md +13 -10
  2. model.safetensors +1 -1
README.md CHANGED
@@ -18,10 +18,10 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0871
- - Accuracy: 1.0
- - F1 Macro: 1.0
- - F1 Weighted: 1.0
+ - Loss: 0.1496
+ - Accuracy: 0.956
+ - F1 Macro: 0.9544
+ - F1 Weighted: 0.9553
 
  ## Model description
 
@@ -49,21 +49,24 @@ The following hyperparameters were used during training:
  - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3
+ - num_epochs: 1
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|
- | 1.2197 | 0.8 | 20 | 0.9300 | 0.63 | 0.5436 | 0.5436 |
- | 0.6318 | 1.6 | 40 | 0.3923 | 0.85 | 0.8255 | 0.8255 |
- | 0.3146 | 2.4 | 60 | 0.0871 | 1.0 | 1.0 | 1.0 |
+ | 1.3595 | 0.16 | 20 | 1.1077 | 0.594 | 0.5350 | 0.5278 |
+ | 0.9369 | 0.32 | 40 | 0.5998 | 0.822 | 0.8158 | 0.8165 |
+ | 0.6807 | 0.48 | 60 | 0.3187 | 0.93 | 0.9277 | 0.9292 |
+ | 0.4498 | 0.64 | 80 | 0.2548 | 0.92 | 0.9189 | 0.9192 |
+ | 0.3857 | 0.8 | 100 | 0.1801 | 0.95 | 0.9490 | 0.9499 |
+ | 0.3638 | 0.96 | 120 | 0.1496 | 0.956 | 0.9544 | 0.9553 |
 
 
  ### Framework versions
 
- - Transformers 4.55.4
+ - Transformers 4.56.1
  - Pytorch 2.8.0+cu126
  - Datasets 4.0.0
- - Tokenizers 0.21.4
+ - Tokenizers 0.22.0
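
The hyperparameters pair `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`: the learning rate climbs linearly from zero over the first 10% of optimizer steps, then decays linearly back to zero by the end of training. A rough sketch of that shape (in `Trainer` this comes from transformers' linear schedule; `total_steps` and `peak_lr` below are illustrative, not values from this run):

```python
def linear_warmup_lr(step, total_steps, peak_lr, warmup_ratio=0.1):
    """Linear warmup to peak_lr over the first warmup_ratio of steps,
    then linear decay to zero (the shape of the `linear` schedule)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # warmup phase: ramp from 0 up to peak_lr
        return peak_lr * step / max(1, warmup_steps)
    # decay phase: fall from peak_lr back to 0 at total_steps
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# e.g. with 100 steps and a 2e-5 peak, the LR peaks at step 10 and hits 0 at step 100
```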
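
The card reports both F1 Macro and F1 Weighted, and after this commit they differ slightly (0.9544 vs 0.9553): macro averages per-class F1 with equal weight, while weighted scales each class's F1 by its support, so a small gap suggests mild class imbalance across the five sentiment labels. A minimal pure-Python sketch of the two averages (the toy labels are made up, not the model's evaluation data):

```python
from collections import Counter

def per_class_f1(y_true, y_pred, label):
    """F1 for one class: harmonic mean of its precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def f1_scores(y_true, y_pred):
    """Return (macro, weighted) F1 over the labels present in y_true."""
    labels = sorted(set(y_true))
    support = Counter(y_true)
    f1s = {lab: per_class_f1(y_true, y_pred, lab) for lab in labels}
    macro = sum(f1s.values()) / len(labels)                              # equal weight per class
    weighted = sum(f1s[lab] * support[lab] for lab in labels) / len(y_true)  # weight by support
    return macro, weighted

macro, weighted = f1_scores([0, 0, 1, 2, 2, 2], [0, 1, 1, 2, 2, 2])
# macro ≈ 0.7778 (plain mean), weighted ≈ 0.8333 (pulled up by the large class)
```

scikit-learn's `f1_score(..., average="macro")` and `average="weighted")` compute the same quantities, which is what `Trainer` metric callbacks typically call.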
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b1c8c992822fe6cc9cd90b513a2ef7537e93ca42e542b2893a77329a59136cf8
+ oid sha256:19561d3b839ddbe757a9f09c9093292e4225cbe87f7342bda0f6cb7d3e6b9739
  size 498622052
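
Because model.safetensors is stored via Git LFS, the diff above changes only the pointer file: the new weights are identified by a fresh SHA-256 oid while the size stays 498622052 bytes. A downloaded copy can be checked against the pointer's oid with a streaming hash; a small sketch (the filename and expected digest are taken from this commit):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so a ~500 MB checkpoint
    never has to be read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from the LFS pointer introduced by this commit
EXPECTED = "19561d3b839ddbe757a9f09c9093292e4225cbe87f7342bda0f6cb7d3e6b9739"
# sha256_of_file("model.safetensors") == EXPECTED  # True for an intact download
```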