HarshalH committed on
Commit 382c1cb · verified · 1 Parent(s): 2753bfd

HarshalH/OnlySFT
README.md CHANGED
@@ -1,13 +1,11 @@
 ---
-base_model: gpt2
-datasets:
-- generator
-library_name: peft
+library_name: transformers
 license: mit
+base_model: gpt2
 tags:
-- trl
-- sft
 - generated_from_trainer
+datasets:
+- scitldr
 model-index:
 - name: output_dir
   results: []
@@ -18,9 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output_dir
 
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the scitldr dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.9638
+- Loss: 3.3356
 
 ## Model description
 
@@ -39,22 +37,26 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0002
+- learning_rate: 2e-05
 - train_batch_size: 8
-- eval_batch_size: 16
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 3.0
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| No log        | 1.0   | 249  | 3.3900          |
+| No log        | 2.0   | 498  | 3.3463          |
+| 3.5312        | 3.0   | 747  | 3.3356          |
 
 
 ### Framework versions
 
-- PEFT 0.13.0
 - Transformers 4.44.2
 - Pytorch 2.4.0
 - Datasets 3.0.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
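As a sanity check on the new training-results table: 249 steps per epoch at a train batch size of 8 implies a training split of roughly 1,985–1,992 examples, and 3 epochs of 249 steps lands exactly on the 747 total steps in the last row. A quick sketch of that arithmetic (the variable names are illustrative, not from the training script):

```python
import math

# Figures taken from the updated model card: batch size 8, 3 epochs,
# and the step column of the training-results table (249 / 498 / 747).
train_batch_size = 8
num_epochs = 3
steps_per_epoch = 249

# Total optimizer steps should match the final row of the table.
total_steps = steps_per_epoch * num_epochs
print(total_steps)  # 747

# steps_per_epoch = ceil(num_examples / batch_size), so the training
# split used here has between 1985 and 1992 examples inclusive.
lo = (steps_per_epoch - 1) * train_batch_size + 1
hi = steps_per_epoch * train_batch_size
print(lo, hi)  # 1985 1992
assert math.ceil(lo / train_batch_size) == steps_per_epoch
assert math.ceil(hi / train_batch_size) == steps_per_epoch
```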
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4b53fa9e18076233fad0fc6190344da6ee63fa358ef59d5967ce3ca253d970ff
+oid sha256:34034ee5f9269246d20da03d683b143174984ad2b730de1ac541543f758c28e0
 size 497774208
runs/Oct06_04-06-09_a307efffef62/events.out.tfevents.1728187571.a307efffef62.30.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b21cc5ed743ddf79bc137011b3ce6cea794d277c6e0934267fb0ec70a410d442
+size 6548
runs/Oct06_04-06-09_a307efffef62/events.out.tfevents.1728187919.a307efffef62.30.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e76f049f22741a99493cfd1f063e8c0c17217f9e9af54c3027c443bebd6e6a4c
+size 359
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:663c45f54fefbaa975ef9a3882ff2b3a4bfaa89a13006e20aad64fb32d2975c3
-size 5496
+oid sha256:5c3ef90b3f4968af27fb202950b04d7dcdd57e8b0133b9e4811a56dce1f3967e
+size 5240