d2j666 committed
Commit 62d0b4e · 1 Parent(s): c9c7ed9

Upload model

Files changed (3)
  1. README.md +9 -7
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -14,9 +14,9 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 9.1595
-- Validation Loss: 8.9603
-- Epoch: 2
+- Train Loss: 8.7799
+- Validation Loss: 8.6826
+- Epoch: 4
 
 ## Model description
 
@@ -35,16 +35,18 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -985, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -987, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: float32
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 9.6041 | 9.5385 | 0 |
-| 9.4427 | 9.2623 | 1 |
-| 9.1595 | 8.9603 | 2 |
+| 9.5975 | 9.5488 | 0 |
+| 9.4736 | 9.3392 | 1 |
+| 9.2465 | 9.0821 | 2 |
+| 8.9970 | 8.8573 | 3 |
+| 8.7799 | 8.6826 | 4 |
 
 
 ### Framework versions
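The optimizer dict in the README diff serializes an `AdamWeightDecay` optimizer driven by a Keras-style `WarmUp` wrapper around a `PolynomialDecay` schedule with `power: 1.0`, i.e. linear warmup followed by linear decay. As a minimal sketch of what that schedule computes — the function name, the default `decay_steps`, and the exact post-warmup step offset are assumptions, not taken from the commit:

```python
def warmup_linear_decay_lr(step, init_lr=5e-5, warmup_steps=1000,
                           decay_steps=4000, end_lr=0.0, power=1.0):
    """Sketch of a WarmUp + PolynomialDecay learning-rate schedule.

    For the first `warmup_steps` the rate ramps from 0 up to `init_lr`;
    afterwards it decays polynomially (power=1.0 -> linearly) toward
    `end_lr` over `decay_steps`, then stays at `end_lr`.
    """
    if step < warmup_steps:
        return init_lr * (step / warmup_steps) ** power
    # Decay phase, with progress clamped to [0, 1] so the rate
    # never drops below end_lr once decay_steps have elapsed.
    progress = min((step - warmup_steps) / decay_steps, 1.0)
    return (init_lr - end_lr) * (1.0 - progress) ** power + end_lr
```

The negative `decay_steps` in the serialized config (-985, changed to -987 in this commit) is likely an artifact of the decay length being computed as total training steps minus warmup steps when the run had far fewer steps than the 1000-step warmup; if so, training never left the warmup ramp.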
config.json CHANGED
@@ -11,7 +11,7 @@
   "initializer_range": 0.02,
   "layer_norm_epsilon": 1e-05,
   "model_type": "gpt2",
-  "n_ctx": 60,
+  "n_ctx": 128,
   "n_embd": 768,
   "n_head": 12,
   "n_inner": null,
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:331aaf957039547b2cda484476edfa81ea1eb9faa9bfca61a3116a4c7088e689
+oid sha256:6efd01ce8c035e17d8ad90a04dad10a42274c144352a4fb477eb09d49264f739
 size 385107024