MarkGG committed
Commit 905323f
1 Parent(s): e69c69d

update model card README.md

Files changed (1): README.md +52 -52
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 4.7172
+ - Loss: 4.7252
 
  ## Model description
 
@@ -49,61 +49,61 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | No log | 0.97 | 29 | 9.9168 |
- | No log | 1.97 | 58 | 9.1863 |
- | No log | 2.97 | 87 | 8.5908 |
- | No log | 3.97 | 116 | 8.2154 |
- | No log | 4.97 | 145 | 7.8435 |
- | No log | 5.97 | 174 | 7.5032 |
- | No log | 6.97 | 203 | 7.2090 |
- | No log | 7.97 | 232 | 6.9060 |
- | No log | 8.97 | 261 | 6.6004 |
- | No log | 9.97 | 290 | 6.3097 |
- | No log | 10.97 | 319 | 6.0465 |
- | No log | 11.97 | 348 | 5.8247 |
- | No log | 12.97 | 377 | 5.6238 |
- | No log | 13.97 | 406 | 5.4851 |
- | No log | 14.97 | 435 | 5.3677 |
- | No log | 15.97 | 464 | 5.2774 |
- | No log | 16.97 | 493 | 5.1951 |
- | No log | 17.97 | 522 | 5.1359 |
- | No log | 18.97 | 551 | 5.0736 |
- | No log | 19.97 | 580 | 5.0316 |
- | No log | 20.97 | 609 | 4.9819 |
- | No log | 21.97 | 638 | 4.9393 |
- | No log | 22.97 | 667 | 4.9036 |
- | No log | 23.97 | 696 | 4.8656 |
- | No log | 24.97 | 725 | 4.8358 |
- | No log | 25.97 | 754 | 4.8175 |
- | No log | 26.97 | 783 | 4.7920 |
- | No log | 27.97 | 812 | 4.7723 |
- | No log | 28.97 | 841 | 4.7480 |
- | No log | 29.97 | 870 | 4.7319 |
- | No log | 30.97 | 899 | 4.7126 |
- | No log | 31.97 | 928 | 4.7008 |
- | No log | 32.97 | 957 | 4.6873 |
- | No log | 33.97 | 986 | 4.6748 |
- | No log | 34.97 | 1015 | 4.6707 |
- | No log | 35.97 | 1044 | 4.6692 |
- | No log | 36.97 | 1073 | 4.6674 |
- | No log | 37.97 | 1102 | 4.6582 |
- | No log | 38.97 | 1131 | 4.6668 |
- | No log | 39.97 | 1160 | 4.6745 |
- | No log | 40.97 | 1189 | 4.6718 |
- | No log | 41.97 | 1218 | 4.6790 |
- | No log | 42.97 | 1247 | 4.6827 |
- | No log | 43.97 | 1276 | 4.6932 |
- | No log | 44.97 | 1305 | 4.7028 |
- | No log | 45.97 | 1334 | 4.7095 |
- | No log | 46.97 | 1363 | 4.7136 |
- | No log | 47.97 | 1392 | 4.7162 |
- | No log | 48.97 | 1421 | 4.7170 |
- | No log | 49.97 | 1450 | 4.7172 |
+ | No log | 0.97 | 29 | 9.9449 |
+ | No log | 1.97 | 58 | 9.1764 |
+ | No log | 2.97 | 87 | 8.6054 |
+ | No log | 3.97 | 116 | 8.2512 |
+ | No log | 4.97 | 145 | 7.8647 |
+ | No log | 5.97 | 174 | 7.5154 |
+ | No log | 6.97 | 203 | 7.2125 |
+ | No log | 7.97 | 232 | 6.9063 |
+ | No log | 8.97 | 261 | 6.5997 |
+ | No log | 9.97 | 290 | 6.3070 |
+ | No log | 10.97 | 319 | 6.0437 |
+ | No log | 11.97 | 348 | 5.8185 |
+ | No log | 12.97 | 377 | 5.6224 |
+ | No log | 13.97 | 406 | 5.4789 |
+ | No log | 14.97 | 435 | 5.3658 |
+ | No log | 15.97 | 464 | 5.2683 |
+ | No log | 16.97 | 493 | 5.1975 |
+ | No log | 17.97 | 522 | 5.1404 |
+ | No log | 18.97 | 551 | 5.0764 |
+ | No log | 19.97 | 580 | 5.0289 |
+ | No log | 20.97 | 609 | 4.9836 |
+ | No log | 21.97 | 638 | 4.9436 |
+ | No log | 22.97 | 667 | 4.9065 |
+ | No log | 23.97 | 696 | 4.8741 |
+ | No log | 24.97 | 725 | 4.8428 |
+ | No log | 25.97 | 754 | 4.8125 |
+ | No log | 26.97 | 783 | 4.7892 |
+ | No log | 27.97 | 812 | 4.7697 |
+ | No log | 28.97 | 841 | 4.7448 |
+ | No log | 29.97 | 870 | 4.7373 |
+ | No log | 30.97 | 899 | 4.7162 |
+ | No log | 31.97 | 928 | 4.7021 |
+ | No log | 32.97 | 957 | 4.6920 |
+ | No log | 33.97 | 986 | 4.6819 |
+ | No log | 34.97 | 1015 | 4.6780 |
+ | No log | 35.97 | 1044 | 4.6786 |
+ | No log | 36.97 | 1073 | 4.6745 |
+ | No log | 37.97 | 1102 | 4.6674 |
+ | No log | 38.97 | 1131 | 4.6693 |
+ | No log | 39.97 | 1160 | 4.6820 |
+ | No log | 40.97 | 1189 | 4.6818 |
+ | No log | 41.97 | 1218 | 4.6769 |
+ | No log | 42.97 | 1247 | 4.6867 |
+ | No log | 43.97 | 1276 | 4.6984 |
+ | No log | 44.97 | 1305 | 4.7064 |
+ | No log | 45.97 | 1334 | 4.7161 |
+ | No log | 46.97 | 1363 | 4.7227 |
+ | No log | 47.97 | 1392 | 4.7229 |
+ | No log | 48.97 | 1421 | 4.7256 |
+ | No log | 49.97 | 1450 | 4.7252 |
 
 
  ### Framework versions
 
- - Transformers 4.23.1
+ - Transformers 4.24.0
  - Pytorch 1.12.1+cu113
  - Datasets 2.6.1
  - Tokenizers 0.13.1
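The updated log shows the validation loss bottoming out before the final epoch. As a minimal sketch (not part of the commit) of how one might read the table, the snippet below locates the best checkpoint from a representative subset of the rows above; the row values are copied from the new table:

```python
# Sketch: find the epoch with the lowest validation loss in the updated log.
# Rows are (epoch, step, validation_loss), a subset copied from the table
# above; the full run spans 50 epochs.
rows = [
    (0.97, 29, 9.9449),
    (9.97, 290, 6.3070),
    (19.97, 580, 5.0289),
    (29.97, 870, 4.7373),
    (36.97, 1073, 4.6745),
    (37.97, 1102, 4.6674),
    (38.97, 1131, 4.6693),
    (49.97, 1450, 4.7252),
]

# The best checkpoint is the row with the smallest validation loss.
best = min(rows, key=lambda r: r[2])
print(f"Best validation loss {best[2]} at epoch {best[0]} (step {best[1]})")
# The final-epoch loss (4.7252, the figure reported in the card summary)
# is slightly worse, suggesting mild overfitting after roughly epoch 38.
```

Note that the headline "Loss: 4.7252" in the card is the last-epoch value, not the minimum reached during training.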