This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7252

## Model description

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.97  | 29   | 9.9449          |
| No log        | 1.97  | 58   | 9.1764          |
| No log        | 2.97  | 87   | 8.6054          |
| No log        | 3.97  | 116  | 8.2512          |
| No log        | 4.97  | 145  | 7.8647          |
| No log        | 5.97  | 174  | 7.5154          |
| No log        | 6.97  | 203  | 7.2125          |
| No log        | 7.97  | 232  | 6.9063          |
| No log        | 8.97  | 261  | 6.5997          |
| No log        | 9.97  | 290  | 6.3070          |
| No log        | 10.97 | 319  | 6.0437          |
| No log        | 11.97 | 348  | 5.8185          |
| No log        | 12.97 | 377  | 5.6224          |
| No log        | 13.97 | 406  | 5.4789          |
| No log        | 14.97 | 435  | 5.3658          |
| No log        | 15.97 | 464  | 5.2683          |
| No log        | 16.97 | 493  | 5.1975          |
| No log        | 17.97 | 522  | 5.1404          |
| No log        | 18.97 | 551  | 5.0764          |
| No log        | 19.97 | 580  | 5.0289          |
| No log        | 20.97 | 609  | 4.9836          |
| No log        | 21.97 | 638  | 4.9436          |
| No log        | 22.97 | 667  | 4.9065          |
| No log        | 23.97 | 696  | 4.8741          |
| No log        | 24.97 | 725  | 4.8428          |
| No log        | 25.97 | 754  | 4.8125          |
| No log        | 26.97 | 783  | 4.7892          |
| No log        | 27.97 | 812  | 4.7697          |
| No log        | 28.97 | 841  | 4.7448          |
| No log        | 29.97 | 870  | 4.7373          |
| No log        | 30.97 | 899  | 4.7162          |
| No log        | 31.97 | 928  | 4.7021          |
| No log        | 32.97 | 957  | 4.6920          |
| No log        | 33.97 | 986  | 4.6819          |
| No log        | 34.97 | 1015 | 4.6780          |
| No log        | 35.97 | 1044 | 4.6786          |
| No log        | 36.97 | 1073 | 4.6745          |
| No log        | 37.97 | 1102 | 4.6674          |
| No log        | 38.97 | 1131 | 4.6693          |
| No log        | 39.97 | 1160 | 4.6820          |
| No log        | 40.97 | 1189 | 4.6818          |
| No log        | 41.97 | 1218 | 4.6769          |
| No log        | 42.97 | 1247 | 4.6867          |
| No log        | 43.97 | 1276 | 4.6984          |
| No log        | 44.97 | 1305 | 4.7064          |
| No log        | 45.97 | 1334 | 4.7161          |
| No log        | 46.97 | 1363 | 4.7227          |
| No log        | 47.97 | 1392 | 4.7229          |
| No log        | 48.97 | 1421 | 4.7256          |
| No log        | 49.97 | 1450 | 4.7252          |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
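For causal language models trained with Transformers, the reported evaluation loss is the mean cross-entropy in nats, so it can be converted to perplexity as a quick sanity check. A minimal sketch, assuming that convention holds for this run:

```python
import math

# Final validation loss from the training results table above.
eval_loss = 4.7252

# For a causal LM, perplexity is exp(mean cross-entropy loss in nats).
perplexity = math.exp(eval_loss)
print(f"perplexity = {perplexity:.1f}")  # prints "perplexity = 112.8"
```

A perplexity around 113 is high for GPT-2-class models, which is consistent with the short fine-tuning run and slowly plateauing loss curve shown in the table.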