yigagilbert committed on
Commit dba55e6 · verified · 1 Parent(s): 582a3e6

yigagilbert/google_t5_language_ID

Files changed (3):
  1. README.md +24 -18
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -12,8 +12,8 @@ model-index:
 - name: google_t5_language_ID
   results:
   - task:
-      type: text2text-generation
       name: Sequence-to-sequence Language Modeling
+      type: text2text-generation
     dataset:
       name: generator
       type: generator
@@ -21,9 +21,9 @@ model-index:
       split: train
       args: default
     metrics:
-    - type: accuracy
-      value: 0.6282717434980809
-      name: Accuracy
+    - name: Accuracy
+      type: accuracy
+      value: 0.6179074697593216
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,12 +33,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4188
-- Accuracy: 0.6283
-- F1 Macro: 0.5525
-- F1 Weighted: 0.5729
-- Precision Macro: 0.6438
-- Recall Macro: 0.6058
+- Loss: 0.5429
+- Accuracy: 0.6179
+- F1 Macro: 0.3389
+- F1 Weighted: 0.5774
+- Precision Macro: 0.3873
+- Recall Macro: 0.3627
 
 ## Model description
 
@@ -72,14 +72,20 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted | Precision Macro | Recall Macro |
 |:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:------------:|
-| 0.1526 | 0.0083 | 500 | 0.6953 | 0.4727 | 0.4232 | 0.4388 | 0.5151 | 0.4558 |
-| 0.0743 | 0.0167 | 1000 | 0.9594 | 0.3863 | 0.3125 | 0.3125 | 0.4760 | 0.3863 |
-| 0.0491 | 0.025 | 1500 | 0.9981 | 0.4410 | 0.3424 | 0.3677 | 0.5306 | 0.4106 |
-| 0.047 | 0.0333 | 2000 | 0.5014 | 0.6161 | 0.5001 | 0.5556 | 0.5811 | 0.5544 |
-| 0.0559 | 0.0417 | 2500 | 0.4182 | 0.6452 | 0.5724 | 0.5936 | 0.6326 | 0.6222 |
-| 0.023 | 0.05 | 3000 | 0.5246 | 0.5914 | 0.5244 | 0.5439 | 0.5739 | 0.5703 |
-| 0.0432 | 0.0583 | 3500 | 0.4539 | 0.6216 | 0.5600 | 0.5807 | 0.6434 | 0.5994 |
-| 0.0601 | 0.0667 | 4000 | 0.4188 | 0.6283 | 0.5525 | 0.5729 | 0.6438 | 0.6058 |
+| 0.1943 | 0.0083 | 500 | 0.6981 | 0.4018 | 0.3139 | 0.3488 | 0.4624 | 0.3616 |
+| 0.0812 | 0.0167 | 1000 | 0.7371 | 0.4086 | 0.3323 | 0.3446 | 0.5179 | 0.3940 |
+| 0.049 | 0.025 | 1500 | 0.7806 | 0.4534 | 0.3793 | 0.3793 | 0.5316 | 0.4534 |
+| 0.0518 | 0.0333 | 2000 | 0.5042 | 0.5845 | 0.5071 | 0.5258 | 0.5576 | 0.5637 |
+| 0.0452 | 0.0417 | 2500 | 0.5120 | 0.6204 | 0.5554 | 0.5554 | 0.6496 | 0.6204 |
+| 0.0288 | 0.05 | 3000 | 0.4798 | 0.6018 | 0.5230 | 0.5618 | 0.6077 | 0.5603 |
+| 0.0341 | 0.0583 | 3500 | 0.4764 | 0.6098 | 0.5456 | 0.5658 | 0.6528 | 0.5881 |
+| 0.0762 | 0.0667 | 4000 | 0.4389 | 0.6251 | 0.5296 | 0.5688 | 0.6091 | 0.5820 |
+| 0.0189 | 0.075 | 4500 | 0.4167 | 0.6681 | 0.6068 | 0.6068 | 0.7167 | 0.6681 |
+| 0.0235 | 0.0833 | 5000 | 0.4673 | 0.6599 | 0.6018 | 0.6018 | 0.7393 | 0.6599 |
+| 0.0274 | 0.0917 | 5500 | 0.3304 | 0.6958 | 0.6102 | 0.6555 | 0.6868 | 0.6478 |
+| 0.0198 | 0.1 | 6000 | 0.4752 | 0.6569 | 0.5877 | 0.6095 | 0.7165 | 0.6335 |
+| 0.0246 | 0.1083 | 6500 | 0.4657 | 0.6540 | 0.5800 | 0.6015 | 0.6400 | 0.6306 |
+| 0.0241 | 0.1167 | 7000 | 0.5429 | 0.6179 | 0.3389 | 0.5774 | 0.3873 | 0.3627 |
 
 
 ### Framework versions
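The updated card reports a wide gap between F1 Macro (0.3389) and F1 Weighted (0.5774) at the final checkpoint, which is what happens when rare classes are predicted poorly while frequent ones are predicted well. A minimal pure-Python sketch of the two averaging schemes (the labels here are a toy example, not the model's actual predictions):

```python
from collections import Counter

def f1_per_class(y_true, y_pred, label):
    """Binary F1 for one class, treating `label` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def f1_macro_weighted(y_true, y_pred):
    """Macro F1 averages classes equally; weighted F1 weights by support."""
    labels = sorted(set(y_true))
    support = Counter(y_true)
    scores = {l: f1_per_class(y_true, y_pred, l) for l in labels}
    macro = sum(scores.values()) / len(labels)
    weighted = sum(scores[l] * support[l] for l in labels) / len(y_true)
    return macro, weighted

# Toy case: the majority class is predicted well, a rare class is always missed,
# so weighted F1 stays high while macro F1 collapses.
y_true = ["en"] * 8 + ["lg"] * 2
y_pred = ["en"] * 10
macro, weighted = f1_macro_weighted(y_true, y_pred)
```

With these toy labels the macro score is pulled down to the midpoint of a good class and a zero-F1 class, while the weighted score remains close to the majority class's F1, mirroring the macro/weighted split in the table above.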
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0422e86f92c34efdc57070a032bbd7e9b5354951467190c30006cd70c1fd5443
+oid sha256:3d794755db7eebd6cddacca0393de71c2ece917fff1d969f2e3437855d4b4245
 size 891644712
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf2aaf419bc4480659dff62183a81dc0fd7d28b54adf30918878b382e59238c7
+oid sha256:86f427897e286675bf89c8e8088e6886f39ea29768b2634712a018b0667e9355
 size 6033
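The `model.safetensors` and `training_args.bin` diffs above touch Git LFS pointer files, not the binaries themselves: each pointer records the spec version, a sha256 object id, and the byte size, so a changed `oid` with an unchanged `size` means new content of the same length. A minimal sketch that parses the three-line pointer format shown above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:3d794755db7eebd6cddacca0393de71c2ece917fff1d969f2e3437855d4b4245\n"
    "size 891644712\n"
)
info = parse_lfs_pointer(pointer)
# info["oid"] identifies the real blob by its sha256; info["size"] is its byte count.
```

This is only a sketch of the pointer layout seen in this commit; the full LFS pointer spec allows additional key/value lines, which the same loop would also capture.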