lapp0 committed
Commit 3d41e59 · verified · 1 parent: 67a7387

End of training

README.md CHANGED
@@ -78,13 +78,13 @@ GPT2LMHeadModel(
 
 # Resource Usage
 
- - Max Train VRAM Use: 15.7146 GB
- - Available VRAM: 23.6429 GB
 - GPUs:
 - 1x NVIDIA GeForce RTX 4090
- - CPUs: 32
- - CPU Memory: 61.9353 GB
- - CPU Memory Bandwidth: 800 GB/s
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -115,7 +115,7 @@ GPT2LMHeadModel(
 <br/>
 
 # Train Dataset
- Trained on 1,756,071,327 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `3,996,000`
 - Subset: `20231101.en`
@@ -154,7 +154,7 @@ The following hyperparameters were used during training:
 - eval_batch_size: `8`
 - seed: `42`
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- - lr_scheduler_type: `constant_with_warmup`
 - num_epochs: `1.0`
 - distillation_objective: `DistillationObjective(
 logits_loss_component=LossComponent(
@@ -172,14 +172,14 @@ The following hyperparameters were used during training:
 projector='orthogonal'
 )
 )`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fbc60ceb9a0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
 - reinitialize_weights: `None`
 - copy_teacher_modules: `[('lm_head', False)]`
 - student_model_as_bitnet: `False`
- - student_model_use_liger: `True`
 - teacher_model_name_or_path: `gpt2`
 - teacher_load_in_8bit: `False`
 - teacher_load_in_4bit: `False`
@@ -189,7 +189,7 @@ The following hyperparameters were used during training:
 - dataset_column_name: `text`
 - dataset_sample_size: `4000000`
 - dataset_test_size: `0.001`
- - dataset_shuffle: `False`
 - dataset_shuffle_seed: `42`
 - dataset_trust_remote_code: `False`
 - gradient_accumulation_steps: `1`
 
 
 # Resource Usage
 
+ - Max Train VRAM Use: 15.7123 GB
+ - Available VRAM: 23.4329 GB
 - GPUs:
 - 1x NVIDIA GeForce RTX 4090
+ - CPUs: 64
+ - CPU Memory: 251.7299 GB
+ - CPU Memory Bandwidth: 1600 GB/s
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
 
 <br/>
 
 # Train Dataset
+ Trained on 1,811,293,876 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `3,996,000`
 - Subset: `20231101.en`
 
 - eval_batch_size: `8`
 - seed: `42`
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
+ - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
 - distillation_objective: `DistillationObjective(
 logits_loss_component=LossComponent(
 
 projector='orthogonal'
 )
 )`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7b4c4250be50>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
 - reinitialize_weights: `None`
 - copy_teacher_modules: `[('lm_head', False)]`
 - student_model_as_bitnet: `False`
+ - student_model_use_liger: `False`
 - teacher_model_name_or_path: `gpt2`
 - teacher_load_in_8bit: `False`
 - teacher_load_in_4bit: `False`
 
 - dataset_column_name: `text`
 - dataset_sample_size: `4000000`
 - dataset_test_size: `0.001`
+ - dataset_shuffle: `True`
 - dataset_shuffle_seed: `42`
 - dataset_trust_remote_code: `False`
 - gradient_accumulation_steps: `1`
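The key hyperparameter change in this diff swaps `lr_scheduler_type` from `constant_with_warmup` to `polynomial`. A minimal sketch of that schedule follows; `learning_rate=0.0002` and `lr_end=0.0001` come from the log directory name in this commit, while `power=1.0` (linear decay) and zero warmup (`warmup_ratio=0`, also per the log directory name) are assumptions about the defaults used:

```python
def polynomial_lr(step, total_steps, lr_init=2e-4, lr_end=1e-4,
                  power=1.0, warmup_steps=0):
    """Learning rate at `step`: linear warmup, then polynomial decay to lr_end.

    Sketch of lr_scheduler_type='polynomial'; lr_init/lr_end are taken from
    this commit's log directory name, power and warmup are assumptions.
    """
    if warmup_steps and step < warmup_steps:
        # Linear warmup from 0 to lr_init (unused here since warmup_ratio=0).
        return lr_init * step / warmup_steps
    if step >= total_steps:
        # After the schedule ends, hold the floor value.
        return lr_end
    # Fraction of the decay phase remaining, raised to `power`.
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return lr_end + (lr_init - lr_end) * remaining ** power
```

With `power=1.0` this decays linearly from 2e-4 at step 0 to the 1e-4 floor at the final step; in practice such a function is typically wrapped in the `torch.optim.lr_scheduler.LambdaLR` object shown in the hyperparameter list above.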
logs/learning_rate=0.0002, lr_scheduler_kwargs=__lr_end___0.0001_, lr_scheduler_type=polynomial, warmup_ratio=0/events.out.tfevents.1725885268.b57c76173204 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:70376c356ee5e84da705ede837e7dc5d448562e3ce3fb7d7f4bc7b1d01f334e3
- size 7849573
 
 version https://git-lfs.github.com/spec/v1
+ oid sha256:265b82583edce459cc819e8faba0a26a6a20c42598c8c0812aa68fa32f3f7062
+ size 8002593
logs/learning_rate=0.0002, lr_scheduler_kwargs=__lr_end___0.0001_, lr_scheduler_type=polynomial, warmup_ratio=0/events.out.tfevents.1725963114.b57c76173204 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07aca7a7f437cd35a4cd7d172ad1e05587a9a6173582bfa2b4530ce7853f2c59
+ size 529
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:54e6fc02cbfe08032db3b00688a74746cda4ac218ead8924a02dfb430b828520
 size 163832792
 
 version https://git-lfs.github.com/spec/v1
+ oid sha256:532afe9746621c9bdeb1ff0c099ddba6bf866c234175fb28c82d4acc3dc14e69
 size 163832792