Model save

Files changed:
- README.md +58 -20
- all_results.json +6 -6
- train_results.json +6 -6
README.md

```diff
@@ -5,18 +5,18 @@ base_model: gpt2
 tags:
 - generated_from_trainer
 model-index:
-- name:
+- name: Se124M100KInfPrompt_endtoken
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# Se124M100KInfPrompt_endtoken
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.6695
 
 ## Model description
 
@@ -36,31 +36,69 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0005
-- train_batch_size:
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps:
-- num_epochs:
+- lr_scheduler_warmup_steps: 200
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:------:|:---------------:|
+| 0.7209 | 1.0 | 5717 | 0.7060 |
+| 0.7027 | 2.0 | 11434 | 0.6916 |
+| 0.7005 | 3.0 | 17151 | 0.6865 |
+| 0.7009 | 4.0 | 22868 | 0.6858 |
+| 0.6933 | 5.0 | 28585 | 0.6854 |
+| 0.6922 | 6.0 | 34302 | 0.6825 |
+| 0.6859 | 7.0 | 40019 | 0.6810 |
+| 0.6923 | 8.0 | 45736 | 0.6812 |
+| 0.6919 | 9.0 | 51453 | 0.6809 |
+| 0.6871 | 10.0 | 57170 | 0.6795 |
+| 0.6844 | 11.0 | 62887 | 0.6776 |
+| 0.6923 | 12.0 | 68604 | 0.6780 |
+| 0.6878 | 13.0 | 74321 | 0.6785 |
+| 0.6765 | 14.0 | 80038 | 0.6775 |
+| 0.6864 | 15.0 | 85755 | 0.6769 |
+| 0.6776 | 16.0 | 91472 | 0.6761 |
+| 0.6823 | 17.0 | 97189 | 0.6768 |
+| 0.6743 | 18.0 | 102906 | 0.6751 |
+| 0.682 | 19.0 | 108623 | 0.6776 |
+| 0.6902 | 20.0 | 114340 | 0.6762 |
+| 0.6774 | 21.0 | 120057 | 0.6751 |
+| 0.6748 | 22.0 | 125774 | 0.6747 |
+| 0.6864 | 23.0 | 131491 | 0.6745 |
+| 0.6819 | 24.0 | 137208 | 0.6756 |
+| 0.6818 | 25.0 | 142925 | 0.6745 |
+| 0.6757 | 26.0 | 148642 | 0.6737 |
+| 0.6801 | 27.0 | 154359 | 0.6734 |
+| 0.6717 | 28.0 | 160076 | 0.6724 |
+| 0.6717 | 29.0 | 165793 | 0.6722 |
+| 0.6802 | 30.0 | 171510 | 0.6723 |
+| 0.677 | 31.0 | 177227 | 0.6725 |
+| 0.6764 | 32.0 | 182944 | 0.6712 |
+| 0.6767 | 33.0 | 188661 | 0.6712 |
+| 0.6758 | 34.0 | 194378 | 0.6716 |
+| 0.6772 | 35.0 | 200095 | 0.6715 |
+| 0.679 | 36.0 | 205812 | 0.6717 |
+| 0.6744 | 37.0 | 211529 | 0.6702 |
+| 0.6654 | 38.0 | 217246 | 0.6707 |
+| 0.6723 | 39.0 | 222963 | 0.6704 |
+| 0.6758 | 40.0 | 228680 | 0.6701 |
+| 0.6795 | 41.0 | 234397 | 0.6701 |
+| 0.6681 | 42.0 | 240114 | 0.6698 |
+| 0.6761 | 43.0 | 245831 | 0.6700 |
+| 0.673 | 44.0 | 251548 | 0.6697 |
+| 0.6736 | 45.0 | 257265 | 0.6698 |
+| 0.673 | 46.0 | 262982 | 0.6695 |
+| 0.6686 | 47.0 | 268699 | 0.6695 |
+| 0.666 | 48.0 | 274416 | 0.6696 |
+| 0.663 | 49.0 | 280133 | 0.6695 |
+| 0.6667 | 50.0 | 285850 | 0.6695 |
 
 
 ### Framework versions
```
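The hyperparameter list in the card maps one-to-one onto 🤗 Transformers `TrainingArguments`. Below is a minimal sketch of that mapping, not part of the commit: `output_dir` and per-epoch evaluation are assumptions, every numeric value is taken from the card, and since this revision drops the old card's `gradient_accumulation_steps: 4`, no accumulation is set here.

```python
# Sketch of the TrainingArguments implied by the model card above.
# output_dir and eval_strategy are assumptions; all numbers come from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Se124M100KInfPrompt_endtoken",  # assumed: matches the model name
    learning_rate=5e-4,              # learning_rate: 0.0005
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH, default betas/epsilon
    lr_scheduler_type="cosine",      # lr_scheduler_type: cosine
    warmup_steps=200,                # lr_scheduler_warmup_steps: 200
    num_train_epochs=50,             # num_epochs: 50
    fp16=True,                       # mixed_precision_training: Native AMP
    eval_strategy="epoch",           # assumed from the per-epoch validation table
)                                    # (named evaluation_strategy before transformers 4.41)
```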
all_results.json

```diff
@@ -1,13 +1,13 @@
 {
-    "epoch":
+    "epoch": 50.0,
     "eval_loss": 0.6763796806335449,
     "eval_runtime": 26.5385,
     "eval_samples_per_second": 368.069,
     "eval_steps_per_second": 46.009,
     "perplexity": 1.9667445843771059,
-    "total_flos":
-    "train_loss": 0.
-    "train_runtime":
-    "train_samples_per_second":
-    "train_steps_per_second":
+    "total_flos": 1.49878932701184e+17,
+    "train_loss": 0.6821009000064568,
+    "train_runtime": 10227.5107,
+    "train_samples_per_second": 223.564,
+    "train_steps_per_second": 27.949
 }
```
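One relationship worth noting in these metrics: `perplexity` is simply `exp(eval_loss)`, as computed by the standard language-modeling eval scripts, so the two fields above are redundant by construction:

```python
import math

# perplexity = exp(eval_loss) for a causal LM evaluated with cross-entropy loss
print(math.exp(0.6763796806335449))  # 1.9667445843771059, the "perplexity" value above
```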
train_results.json

```diff
@@ -1,8 +1,8 @@
 {
-    "epoch":
-    "total_flos":
-    "train_loss": 0.
-    "train_runtime":
-    "train_samples_per_second":
-    "train_steps_per_second":
+    "epoch": 50.0,
+    "total_flos": 1.49878932701184e+17,
+    "train_loss": 0.6821009000064568,
+    "train_runtime": 10227.5107,
+    "train_samples_per_second": 223.564,
+    "train_steps_per_second": 27.949
 }
```
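The throughput fields are consistent with the README's training table (5717 optimizer steps per epoch over 50 epochs gives 285,850 steps). A quick cross-check, using only numbers from this commit:

```python
# Consistency check of the throughput numbers against the README training table.
train_runtime = 10227.5107    # seconds, from train_results.json
steps_per_second = 27.949
samples_per_second = 223.564

print(round(train_runtime * steps_per_second))    # ~285,849, vs. 285,850 steps in the README
print(round(train_runtime * samples_per_second))  # ~2,286,503, vs. 285,850 steps * batch size 8
```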