---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: creativeml-openrail-m
tags:
- generated_from_trainer
- Distily
base_model_relation: finetune
model-index:
- name: distily_performance_tests
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library, using teacher model [gpt2](https://huggingface.co/gpt2) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
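
The distilled student loads like any other causal LM checkpoint. A minimal usage sketch follows; the repo id is an assumption (this card only gives the model name `distily_performance_tests`), so substitute the actual one:

```python
# Minimal usage sketch; the repo id is hypothetical, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_performance_tests"  # hypothetical; replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The history of Wikipedia begins", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```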

# Model Architecture
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 81,912,576
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.16 GB

<details>
<summary>Student Model Details</summary>

```
GPT2LMHeadModel(
  (transformer): GPT2Model(
    (wte): Embedding(50257, 768)
    (wpe): Embedding(1024, 768)
    (drop): Dropout(p=0.1, inplace=False)
    (h): ModuleList(
      (0-5): 6 x GPT2Block(
        (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (attn): GPT2SdpaAttention(
          (c_attn): Conv1D()
          (c_proj): Conv1D()
          (attn_dropout): Dropout(p=0.1, inplace=False)
          (resid_dropout): Dropout(p=0.1, inplace=False)
        )
        (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (mlp): GPT2MLP(
          (c_fc): Conv1D()
          (c_proj): Conv1D()
          (act): NewGELUActivation()
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=768, out_features=50257, bias=False)
)
```

</details>
<br/>

# Resource Usage

- Max Train VRAM Use: 10.4783 GB
- Available VRAM: 23.4329 GB
- GPUs:
  - 1x NVIDIA GeForce RTX 4090
- CPUs: 64
- CPU Memory: 251.7190 GB
- CPU Memory Bandwidth: 1600 GB/s

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 81,912,576
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.16 GB

<details>
<summary>Module Diff Details</summary>

```diff
--- teacher model modules
+++ student model modules
@@ -4,7 +4,7 @@
     (wpe): Embedding(1024, 768)
     (drop): Dropout(p=0.1, inplace=False)
     (h): ModuleList(
-      (0-11): 12 x GPT2Block(
+      (0-5): 6 x GPT2Block(
         (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
         (attn): GPT2SdpaAttention(
           (c_attn): Conv1D()
```

</details>
<br/>
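
The only structural change is halving the number of transformer blocks (12 -> 6); the embeddings and tied head are kept. The quoted parameter counts check out, as this sketch shows (it only downloads the public teacher, since the student repo id is not given on this card):

```python
# Quick check of the parameter counts quoted above (teacher only).
from transformers import AutoModelForCausalLM

teacher = AutoModelForCausalLM.from_pretrained("gpt2")
print(sum(p.numel() for p in teacher.parameters()))  # 124,439,808

# Each GPT2Block holds 7,087,872 parameters, so dropping 6 of 12 blocks
# removes 42,527,232: 124,439,808 - 42,527,232 = 81,912,576, and at
# 2 bytes/parameter in bfloat16 that is ~0.16 GB, matching the card.
```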

# Train Dataset
Trained on 3,884,521 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset; a sketch of reproducing this subset follows the settings below.

- Num Samples: `4,990`
- Subset: `20231101.en`
- Split: `train`
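
This is how the subset could be rebuilt with the `datasets` library, using the sample-size and test-split settings from the hyperparameters below; everything beyond those settings is an assumption, not Distily's exact pipeline:

```python
# Sketch: rebuild the ~4,990-sample training subset described above.
# Selection details beyond the card's settings (sample size 5000, test size
# 0.002, seed 42, no shuffle) are assumptions.
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
ds = ds.select(range(5000))  # dataset_sample_size: 5000, dataset_shuffle: False
splits = ds.train_test_split(test_size=0.002, seed=42)
print(len(splits["train"]))  # 4990, matching "Num Samples" above
```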

# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(
        weight=1,
        loss_fn='kl'
    ),
    hs_loss_component=LossComponent(
        weight=0
    ),
    attn_loss_component=LossComponent(
        weight=5.0,
        loss_fn='raw_mse',
        layer_mapper='layer-2',
        norm='layernorm_teacher_only_affine',
        projector='orthogonal'
    )
)
```
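
In words: the student matches the teacher's output distribution with a KL-divergence loss on the logits (weight 1) and the teacher's attention maps with a plain MSE loss (weight 5.0); the hidden-state component is disabled. Below is an illustrative PyTorch sketch of what such an objective computes, not Distily's implementation; the `layer-2` layer mapper, teacher-only LayerNorm, and orthogonal projector from the config are omitted:

```python
# Illustrative sketch of the objective above; NOT Distily's code. Assumes the
# student/teacher attention lists are already aligned (the 'layer-2' mapper's
# job) and skips the normalization and orthogonal projection steps.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns,
                      logits_weight=1.0, attn_weight=5.0):
    # KL divergence between teacher and student next-token distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # 'raw_mse' between corresponding attention tensors, averaged over layers.
    attn_mse = torch.stack(
        [F.mse_loss(s, t) for s, t in zip(student_attns, teacher_attns)]
    ).mean()
    return logits_weight * kl + attn_weight * attn_mse
```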

# Hyperparameters
The following hyperparameters were used during training:

<details>
<summary>Expand</summary>

- learning_rate: `0.0002`
- train_batch_size: `8`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(
    logits_loss_component=LossComponent(
        weight=1,
        loss_fn='kl'
    ),
    hs_loss_component=LossComponent(
        weight=0
    ),
    attn_loss_component=LossComponent(
        weight=5.0,
        loss_fn='raw_mse',
        layer_mapper='layer-2',
        norm='layernorm_teacher_only_affine',
        projector='orthogonal'
    )
)`
- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7efbaca48460>`
- student_model_name_or_path: `None`
- student_config_name_or_path: `distilbert/distilgpt2`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- student_model_use_liger: `False`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `5000`
- dataset_test_size: `0.002`
- dataset_shuffle: `False`
- dataset_shuffle_seed: `42`
- dataset_trust_remote_code: `False`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.0`
- warmup_steps: `0`
- gradient_checkpointing: `True`

</details>
<br/>
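
Most of the generic entries map one-to-one onto `transformers.TrainingArguments`. A hedged sketch of those trainer-level settings; the Distily-specific fields, such as `distillation_objective` and the student/teacher paths, are configured through Distily itself and are not reproduced here:

```python
# Sketch: the trainer-level settings above as TrainingArguments.
# output_dir and bf16 are assumptions, not stated on the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distily_performance_tests",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="polynomial",
    num_train_epochs=1.0,
    gradient_accumulation_steps=1,
    weight_decay=0.0,
    max_grad_norm=1.0,
    warmup_ratio=0.0,
    warmup_steps=0,
    gradient_checkpointing=True,
    bf16=True,  # inferred from the bfloat16 dtype above
)
```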

# Framework Versions
- Distily 0.5.0
- Transformers 4.44.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.0