Theoreticallyhugo committed · Commit f8eee7e · verified · 1 Parent(s): 41a6edf

trainer: training complete at 2024-02-06 13:20:21.536579.

Files changed (2)
  1. README.md +15 -15
  2. model.safetensors +1 -1
README.md CHANGED
@@ -17,13 +17,13 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8493
- - Claim: {'precision': 0.41935483870967744, 'recall': 0.4513888888888889, 'f1-score': 0.4347826086956522, 'support': 144.0}
+ - Loss: 0.8463
+ - Claim: {'precision': 0.4140127388535032, 'recall': 0.4513888888888889, 'f1-score': 0.4318936877076412, 'support': 144.0}
  - Majorclaim: {'precision': 0.6923076923076923, 'recall': 0.5, 'f1-score': 0.5806451612903226, 'support': 72.0}
- - Premise: {'precision': 0.8009950248756219, 'recall': 0.8193384223918575, 'f1-score': 0.8100628930817609, 'support': 393.0}
- - Accuracy: 0.6946
- - Macro avg: {'precision': 0.6375525186309973, 'recall': 0.5902424370935822, 'f1-score': 0.6084968876892453, 'support': 609.0}
- - Weighted avg: {'precision': 0.6979052469564315, 'recall': 0.6945812807881774, 'f1-score': 0.694203389566846, 'support': 609.0}
+ - Premise: {'precision': 0.8025, 'recall': 0.816793893129771, 'f1-score': 0.8095838587641867, 'support': 393.0}
+ - Accuracy: 0.6929
+ - Macro avg: {'precision': 0.6362734770537318, 'recall': 0.5893942606728867, 'f1-score': 0.6073742359207168, 'support': 609.0}
+ - Weighted avg: {'precision': 0.6976132811840038, 'recall': 0.6929392446633826, 'f1-score': 0.6932111644287832, 'support': 609.0}
 
  ## Model description
 
@@ -52,16 +52,16 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | Premise | Accuracy | Macro avg | Weighted avg |
- |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|:--------:|:-----------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|
- | 0.7306 | 1.0 | 533 | 0.6308 | {'precision': 0.5625, 'recall': 0.25, 'f1-score': 0.34615384615384615, 'support': 144.0} | {'precision': 0.6086956521739131, 'recall': 0.5833333333333334, 'f1-score': 0.5957446808510638, 'support': 72.0} | {'precision': 0.7647058823529411, 'recall': 0.926208651399491, 'f1-score': 0.8377445339470655, 'support': 393.0} | 0.7258 | {'precision': 0.6453005115089514, 'recall': 0.5865139949109415, 'f1-score': 0.5932143536506586, 'support': 609.0} | {'precision': 0.6984490947803409, 'recall': 0.7257799671592775, 'f1-score': 0.6928955216890429, 'support': 609.0} |
- | 0.5343 | 2.0 | 1066 | 0.6586 | {'precision': 0.45132743362831856, 'recall': 0.3541666666666667, 'f1-score': 0.39688715953307396, 'support': 144.0} | {'precision': 0.8, 'recall': 0.5, 'f1-score': 0.6153846153846154, 'support': 72.0} | {'precision': 0.7782705099778271, 'recall': 0.8931297709923665, 'f1-score': 0.8317535545023698, 'support': 393.0} | 0.7192 | {'precision': 0.6765326478687154, 'recall': 0.5824321458863444, 'f1-score': 0.6146751098066864, 'support': 609.0} | {'precision': 0.7035327764593825, 'recall': 0.7192118226600985, 'f1-score': 0.7033474387518659, 'support': 609.0} |
- | 0.3563 | 3.0 | 1599 | 0.8493 | {'precision': 0.41935483870967744, 'recall': 0.4513888888888889, 'f1-score': 0.4347826086956522, 'support': 144.0} | {'precision': 0.6923076923076923, 'recall': 0.5, 'f1-score': 0.5806451612903226, 'support': 72.0} | {'precision': 0.8009950248756219, 'recall': 0.8193384223918575, 'f1-score': 0.8100628930817609, 'support': 393.0} | 0.6946 | {'precision': 0.6375525186309973, 'recall': 0.5902424370935822, 'f1-score': 0.6084968876892453, 'support': 609.0} | {'precision': 0.6979052469564315, 'recall': 0.6945812807881774, 'f1-score': 0.694203389566846, 'support': 609.0} |
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|:--------:|:-----------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|
+ | 0.7343 | 1.0 | 533 | 0.6230 | {'precision': 0.47058823529411764, 'recall': 0.2777777777777778, 'f1-score': 0.3493449781659389, 'support': 144.0} | {'precision': 0.5647058823529412, 'recall': 0.6666666666666666, 'f1-score': 0.6114649681528662, 'support': 72.0} | {'precision': 0.7790432801822323, 'recall': 0.8702290076335878, 'f1-score': 0.8221153846153846, 'support': 393.0} | 0.7061 | {'precision': 0.6047791326097637, 'recall': 0.6048911506926774, 'f1-score': 0.5943084436447299, 'support': 609.0} | {'precision': 0.6807677151451265, 'recall': 0.7060755336617406, 'f1-score': 0.6854228254790602, 'support': 609.0} |
+ | 0.5313 | 2.0 | 1066 | 0.6606 | {'precision': 0.4491525423728814, 'recall': 0.3680555555555556, 'f1-score': 0.4045801526717558, 'support': 144.0} | {'precision': 0.6612903225806451, 'recall': 0.5694444444444444, 'f1-score': 0.6119402985074627, 'support': 72.0} | {'precision': 0.7878787878787878, 'recall': 0.8600508905852418, 'f1-score': 0.8223844282238443, 'support': 393.0} | 0.7094 | {'precision': 0.6327738842774381, 'recall': 0.5991836301950806, 'f1-score': 0.6129682931343542, 'support': 609.0} | {'precision': 0.6928197585613547, 'recall': 0.7093596059113301, 'f1-score': 0.6987131753189507, 'support': 609.0} |
+ | 0.3551 | 3.0 | 1599 | 0.8463 | {'precision': 0.4140127388535032, 'recall': 0.4513888888888889, 'f1-score': 0.4318936877076412, 'support': 144.0} | {'precision': 0.6923076923076923, 'recall': 0.5, 'f1-score': 0.5806451612903226, 'support': 72.0} | {'precision': 0.8025, 'recall': 0.816793893129771, 'f1-score': 0.8095838587641867, 'support': 393.0} | 0.6929 | {'precision': 0.6362734770537318, 'recall': 0.5893942606728867, 'f1-score': 0.6073742359207168, 'support': 609.0} | {'precision': 0.6976132811840038, 'recall': 0.6929392446633826, 'f1-score': 0.6932111644287832, 'support': 609.0} |
 
 
  ### Framework versions
 
- - Transformers 4.33.0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.4
- - Tokenizers 0.13.3
+ - Transformers 4.37.1
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.16.1
+ - Tokenizers 0.15.1
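The card's "Macro avg" and "Weighted avg" rows follow the usual scikit-learn `classification_report` conventions: macro is the unweighted mean over the three classes, while weighted scales each class by its support. A quick sanity check of that arithmetic against the updated precision figures (values taken from the new card above):

```python
# Per-class precision and support from the updated evaluation results.
classes = {
    "Claim":      {"precision": 0.4140127388535032, "support": 144},
    "Majorclaim": {"precision": 0.6923076923076923, "support": 72},
    "Premise":    {"precision": 0.8025,             "support": 393},
}

total_support = sum(c["support"] for c in classes.values())  # 609, as in the card

# Macro: plain mean over classes; weighted: mean weighted by class support.
macro = sum(c["precision"] for c in classes.values()) / len(classes)
weighted = sum(c["precision"] * c["support"] for c in classes.values()) / total_support

print(f"macro avg precision:    {macro:.4f}")     # 0.6363, matching 0.63627...
print(f"weighted avg precision: {weighted:.4f}")  # 0.6976, matching 0.69761...
```

The gap between the two (0.636 vs 0.698) reflects the class imbalance: Premise accounts for 393 of the 609 evaluation tokens, so the support-weighted figure tracks the strongest class.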
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:582eb75d677ca53df2db52fa6154a93e6e7f7808ec59dd7821bddf35453ef789
+ oid sha256:4492e9b31307da98b9aea444636854087e7155ef4648b29a0b320085cad2cc61
  size 430911284
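Because model.safetensors is stored via Git LFS, the diff above changes only the pointer file: the pointer records the blob's sha256 and byte size, and `git lfs pull` fetches the actual weights. A minimal sketch of checking a downloaded file against such a pointer (the pointer text is copied from the new version above; the `verify` helper is illustrative, not part of any Git LFS API):

```python
import hashlib

# Git LFS pointer content, as in the updated model.safetensors above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4492e9b31307da98b9aea444636854087e7155ef4648b29a0b320085cad2cc61
size 430911284
"""

# Each pointer line is "key value"; parse into a dict.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
expected_oid = fields["oid"].split(":", 1)[1]
expected_size = int(fields["size"])

def verify(path: str) -> bool:
    """Return True if the file at `path` matches the pointer's oid and size."""
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == expected_oid and size == expected_size
```

Since the reported size is unchanged at 430911284 bytes, only the hash distinguishes the old and new checkpoints.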