augustocsc committed on
Commit 07ec7f8 · verified · 1 Parent(s): 14e0e3d

Model save

Files changed (3)
  1. README.md +8 -8
  2. all_results.json +5 -5
  3. train_results.json +5 -5
README.md CHANGED
@@ -5,18 +5,18 @@ base_model: gpt2
 tags:
 - generated_from_trainer
 model-index:
-- name: Se124M500KInfMinimalist
+- name: Se124M500KInfSimple
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Se124M500KInfMinimalist
+# Se124M500KInfSimple
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5700
+- Loss: 0.4813
 
 ## Model description
 
@@ -36,8 +36,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 24
+- eval_batch_size: 24
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
@@ -48,9 +48,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step  | Validation Loss |
 |:-------------:|:-----:|:-----:|:---------------:|
-| 0.1527        | 1.0   | 7035  | 0.5857          |
-| 0.1485        | 2.0   | 14070 | 0.5736          |
-| 0.1468        | 3.0   | 21105 | 0.5700          |
+| 0.1713        | 1.0   | 11089 | 0.4982          |
+| 0.1659        | 2.0   | 22178 | 0.4854          |
+| 0.1676        | 3.0   | 33267 | 0.4813          |
 
 
 ### Framework versions
all_results.json CHANGED
@@ -5,9 +5,9 @@
     "eval_samples_per_second": 160.628,
     "eval_steps_per_second": 5.022,
     "perplexity": 1.7682359469654831,
-    "total_flos": 4.426734746743603e+16,
-    "train_loss": 0.15721422936277518,
-    "train_runtime": 2145.6584,
-    "train_samples_per_second": 314.741,
-    "train_steps_per_second": 9.836
+    "total_flos": 5.233249244912026e+16,
+    "train_loss": 0.17665502242223255,
+    "train_runtime": 3282.9562,
+    "train_samples_per_second": 243.185,
+    "train_steps_per_second": 10.133
 }
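Note that the `perplexity` field in `all_results.json` is left as unchanged context by this commit, and its value matches the *previous* evaluation loss of 0.5700 rather than the new 0.4813: the Trainer reports perplexity as the exponential of the evaluation cross-entropy loss. A minimal check of that relationship, using the values visible in the diff:

```python
import math

# Perplexity as reported by HF Trainer scripts: exp(eval cross-entropy loss).
old_eval_loss = 0.5700  # "Loss" from the old README (rounded to 4 decimals)
perplexity = math.exp(old_eval_loss)

# Agrees with the unchanged "perplexity": 1.7682359469654831 up to rounding
# of the loss; exp(0.4813) ≈ 1.62 would be the value for the new run.
print(perplexity)
```

If the eval metrics for the new run are regenerated, this field should be updated alongside the loss.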
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "epoch": 3.0,
-    "total_flos": 4.426734746743603e+16,
-    "train_loss": 0.15721422936277518,
-    "train_runtime": 2145.6584,
-    "train_samples_per_second": 314.741,
-    "train_steps_per_second": 9.836
+    "total_flos": 5.233249244912026e+16,
+    "train_loss": 0.17665502242223255,
+    "train_runtime": 3282.9562,
+    "train_samples_per_second": 243.185,
+    "train_steps_per_second": 10.133
 }
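The updated timing fields are internally consistent with the step count and batch size from the README diff: runtime times samples-per-second should roughly equal the final step count times the per-device batch size (assuming no gradient accumulation, which this card does not mention). A quick sanity-check sketch over the values in this commit:

```python
# Values for the new run, taken from train_results.json and the README diff.
train_runtime = 3282.9562      # seconds
samples_per_second = 243.185   # reported throughput
total_steps = 33267            # final step after 3 epochs (README table)
train_batch_size = 24          # README hyperparameters

# Two independent estimates of the total number of samples processed.
samples_from_timing = train_runtime * samples_per_second
samples_from_steps = total_steps * train_batch_size

rel_err = abs(samples_from_timing - samples_from_steps) / samples_from_steps
print(rel_err)  # agreement well under 0.1%
```

The old run's numbers pass the same check (2145.6584 s × 314.741 samples/s ≈ 21105 steps × 32), which supports reading the batch-size change from 32 to 24 as the cause of the longer runtime and higher step count.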