dtorber committed
Commit 5c0ad14 · verified · 1 Parent(s): 98155d7

Model save

Files changed (2)
  1. README.md +20 -16
  2. model.safetensors +1 -1
README.md CHANGED
@@ -14,10 +14,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [NLP-LTU/bertweet-large-sexism-detector](https://huggingface.co/NLP-LTU/bertweet-large-sexism-detector) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1313
-- Icm: 0.1644
-- Icmnorm: 0.5834
-- Fmeasure: 0.7248
+- Loss: 0.9648
+- Icm: 0.2479
+- Icmnorm: 0.6258
+- Fmeasure: 0.7526
 
 ## Model description
 
@@ -36,31 +36,35 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-06
+- learning_rate: 2e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 6
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Icm    | Icmnorm | Fmeasure |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
-| 0.8513        | 1.0   | 1625 | 0.9252          | 0.1119 | 0.5568  | 0.7073   |
-| 0.872         | 2.0   | 3250 | 1.1313          | 0.1644 | 0.5834  | 0.7248   |
-| 0.9077        | 3.0   | 4875 | 1.2531          | 0.1286 | 0.5653  | 0.7128   |
-| 0.7907        | 4.0   | 6500 | 1.3570          | 0.1644 | 0.5834  | 0.7248   |
-| 0.6587        | 5.0   | 8125 | 1.4076          | 0.1644 | 0.5834  | 0.7248   |
-| 0.6787        | 6.0   | 9750 | 1.4216          | 0.1644 | 0.5834  | 0.7248   |
+| Training Loss | Epoch | Step | Validation Loss | Icm     | Icmnorm | Fmeasure |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:--------:|
+| 0.8774        | 1.0   | 771  | 1.1253          | -0.1346 | 0.4317  | 0.5634   |
+| 0.9551        | 2.0   | 1542 | 0.8275          | 0.1264  | 0.5641  | 0.7110   |
+| 0.9559        | 3.0   | 2313 | 0.9648          | 0.2479  | 0.6258  | 0.7526   |
+| 0.6926        | 4.0   | 3084 | 1.5632          | 0.1570  | 0.5797  | 0.7172   |
+| 0.4547        | 5.0   | 3855 | 1.8028          | 0.1284  | 0.5652  | 0.7098   |
+| 0.2611        | 6.0   | 4626 | 1.9528          | 0.2025  | 0.6027  | 0.7359   |
+| 0.1528        | 7.0   | 5397 | 2.1400          | 0.1119  | 0.5568  | 0.7073   |
+| 0.1173        | 8.0   | 6168 | 2.1909          | 0.1524  | 0.5773  | 0.7195   |
+| 0.1096        | 9.0   | 6939 | 2.4630          | 0.1166  | 0.5591  | 0.7073   |
+| 0.0535        | 10.0  | 7710 | 2.4917          | 0.1809  | 0.5918  | 0.7276   |
 
 
 ### Framework versions
 
-- Transformers 4.39.3
-- Pytorch 2.2.2+cu121
+- Transformers 4.38.2
+- Pytorch 2.3.0+cu121
 - Datasets 2.18.0
 - Tokenizers 0.15.2
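The updated hyperparameters imply a specific learning-rate trajectory: with `lr_scheduler_type: linear` and 7710 total optimizer steps (the last step in the results table), the rate decays linearly from the 2e-05 peak to zero. A minimal sketch of that schedule, assuming zero warmup steps since the card does not list any (this mirrors the shape of `get_linear_schedule_with_warmup` in transformers, but is a standalone illustration, not the trainer's actual code):

```python
def linear_lr(step, total_steps=7710, peak_lr=2e-05, warmup_steps=0):
    """Linear warmup to peak_lr, then linear decay to zero.

    total_steps and peak_lr come from the model card; warmup_steps=0
    is an assumption (no warmup value is listed in the card).
    """
    if step < warmup_steps:
        # Ramp up proportionally during warmup.
        return peak_lr * step / max(1, warmup_steps)
    # Decay proportionally over the remaining steps, clamped at zero.
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return peak_lr * max(0.0, remaining)

print(linear_lr(0))     # 2e-05 at the first step (no warmup assumed)
print(linear_lr(3855))  # 1e-05 halfway through training (end of epoch 5)
print(linear_lr(7710))  # 0.0 at the final step
```

This decaying rate is worth keeping in mind when reading the results table: validation loss bottoms out at epoch 2–3 (where the reported headline metrics come from) while training loss keeps falling, a typical overfitting pattern that the shrinking learning rate alone does not prevent.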
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:05ba428bd5cf9576e379d31b8f576d832a156afb6853bc745186fc35ebea8937
+oid sha256:d3861b7096fe52590336511715040b68634f02fc325addd66c125ed15411284f
 size 1421495416
</oid>
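The `oid sha256:` field in the git-LFS pointer above is the SHA-256 digest of the actual weights blob, so a downloaded `model.safetensors` can be checked against it locally. A minimal stdlib-only sketch (the file path is hypothetical; the expected digest is the new oid from this commit):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so the ~1.4 GB weights file
    never has to sit fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage against this commit's pointer:
# assert sha256_of("model.safetensors") == (
#     "d3861b7096fe52590336511715040b68634f02fc325addd66c125ed15411284f"
# )
```

Note that the pointer's `size` is unchanged (1421495416 bytes) while the oid differs, which is exactly what a retrain of the same architecture looks like: identical tensor layout, different values.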