update model card README.md

README.md

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0319
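
As a quick usage sketch (the repo id `gpt2-finetuned` is a placeholder; substitute this model's actual Hub id):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2-finetuned"  # placeholder; use this model's actual Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```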

## Model description

More information needed

### Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto `TrainingArguments` follows the list:
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
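
A minimal sketch of how these settings map onto `transformers.TrainingArguments`; `output_dir`, `learning_rate`, and the batch size are placeholders for values the card does not list:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-finetuned",    # placeholder
    learning_rate=5e-5,             # placeholder; not listed on the card
    per_device_train_batch_size=8,  # placeholder; not listed on the card
    num_train_epochs=50,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                      # "Native AMP" mixed precision
)
```

Note that the results table below tops out at step 800 (16 optimizer steps per epoch over 50 epochs), which is fewer than the 1000 warmup steps, so under the usual warmup-then-cosine schedule the learning rate would still have been in its linear warmup phase for the entire run.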

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 16 | 10.3553 |
| No log | 1.96 | 32 | 9.5625 |
| No log | 2.96 | 48 | 9.0898 |
| No log | 3.96 | 64 | 8.7852 |
| No log | 4.96 | 80 | 8.4694 |
| No log | 5.96 | 96 | 8.2122 |
| No log | 6.96 | 112 | 8.0040 |
| No log | 7.96 | 128 | 7.8029 |
| No log | 8.96 | 144 | 7.5950 |
| No log | 9.96 | 160 | 7.4081 |
| No log | 10.96 | 176 | 7.2391 |
| No log | 11.96 | 192 | 7.0784 |
| No log | 12.96 | 208 | 6.9139 |
| No log | 13.96 | 224 | 6.7530 |
| No log | 14.96 | 240 | 6.5983 |
| No log | 15.96 | 256 | 6.4403 |
| No log | 16.96 | 272 | 6.3025 |
| No log | 17.96 | 288 | 6.1562 |
| No log | 18.96 | 304 | 6.0147 |
| No log | 19.96 | 320 | 5.8919 |
| No log | 20.96 | 336 | 5.7709 |
| No log | 21.96 | 352 | 5.6666 |
| No log | 22.96 | 368 | 5.5818 |
| No log | 23.96 | 384 | 5.5051 |
| No log | 24.96 | 400 | 5.4356 |
| No log | 25.96 | 416 | 5.3788 |
| No log | 26.96 | 432 | 5.3230 |
| No log | 27.96 | 448 | 5.2823 |
| No log | 28.96 | 464 | 5.2513 |
| No log | 29.96 | 480 | 5.2218 |
| No log | 30.96 | 496 | 5.1910 |
| No log | 31.96 | 512 | 5.1609 |
| No log | 32.96 | 528 | 5.1500 |
| No log | 33.96 | 544 | 5.1268 |
| No log | 34.96 | 560 | 5.1012 |
| No log | 35.96 | 576 | 5.0973 |
| No log | 36.96 | 592 | 5.0769 |
| No log | 37.96 | 608 | 5.0653 |
| No log | 38.96 | 624 | 5.0489 |
| No log | 39.96 | 640 | 5.0458 |
| No log | 40.96 | 656 | 5.0379 |
| No log | 41.96 | 672 | 5.0347 |
| No log | 42.96 | 688 | 5.0161 |
| No log | 43.96 | 704 | 5.0226 |
| No log | 44.96 | 720 | 5.0215 |
| No log | 45.96 | 736 | 5.0190 |
| No log | 46.96 | 752 | 5.0087 |
| No log | 47.96 | 768 | 5.0309 |
| No log | 48.96 | 784 | 5.0232 |
| No log | 49.96 | 800 | 5.0319 |
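
Since the validation loss is the causal-LM cross-entropy in nats per token, it converts to perplexity via the exponential; a quick check on the final checkpoint:

```python
import math

final_val_loss = 5.0319          # final validation loss from the table above
print(math.exp(final_val_loss))  # ≈ 153.2, the corresponding perplexity
```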

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1