---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: verify_v0.3.0
  results: []
---
|
# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library, using the teacher model [gpt2](https://huggingface.co/gpt2) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
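
The student can be loaded like any Hugging Face causal LM. A minimal sketch; `distily/verify_v0.3.0` is a placeholder repo id, substitute this model's actual hub path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: "distily/verify_v0.3.0" is a hypothetical repo id for this card's model.
model = AutoModelForCausalLM.from_pretrained("distily/verify_v0.3.0", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("distily/verify_v0.3.0")

prompt = "The history of Wikipedia begins"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```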
|
# Model Architecture
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
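
The size figure follows from the parameter count: at 2 bytes per bfloat16 parameter, 124,439,808 parameters take about 0.23 GiB (0.25 GB), consistent with the 0.24 GB reported above. A quick sanity check with Transformers, using `gpt2`, which shares this architecture:

```python
import torch
from transformers import AutoModelForCausalLM

# gpt2 shares the student's architecture and parameter count.
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,}")              # 124,439,808
print(n_params * 2 / 2**30, "GiB")  # ~0.23 GiB at 2 bytes per bf16 param
```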
|
# Benchmark Metrics Comparison

| Metric | student (dataset_sample_size=1000) | teacher |
| :--- | :--- | :--- |
| ai2_arc (acc) | 0.225 | 0.304 |
| ai2_arc (acc_norm) | 0.251 | 0.309 |
| ai2_arc (acc_norm_stderr) | | 0.01 |
| ai2_arc (acc_stderr) | | 0.01 |
| arc_challenge (acc) | 0.182 | 0.184 |
| arc_challenge (acc_norm) | 0.223 | 0.214 |
| arc_challenge (acc_norm_stderr) | | 0.013 |
| arc_challenge (acc_stderr) | | 0.012 |
| arc_easy (acc) | 0.268 | 0.424 |
| arc_easy (acc_norm) | 0.278 | 0.405 |
| arc_easy (acc_norm_stderr) | | 0.016 |
| arc_easy (acc_stderr) | | 0.016 |
| boolq (acc) | 0.375 | 0.541 |
| boolq (acc_stderr) | | 0.016 |
| cola (mcc) | 0.0 | 0.009 |
| cola (mcc_stderr) | | 0.032 |
| glue (acc) | 0.477 | 0.41 |
| glue (acc_stderr) | | 0.006 |
| glue (f1) | 0.0 | 0.526 |
| glue (f1_stderr) | | 0.014 |
| glue (mcc) | 0.0 | 0.009 |
| glue (mcc_stderr) | | 0.032 |
| hellaswag (acc) | 0.287 | 0.337 |
| hellaswag (acc_norm) | 0.269 | 0.384 |
| hellaswag (acc_norm_stderr) | | 0.015 |
| hellaswag (acc_stderr) | | 0.015 |
| mnli (acc) | 0.335 | 0.323 |
| mnli (acc_stderr) | | 0.015 |
| mnli_mismatch (acc) | 0.357 | 0.344 |
| mnli_mismatch (acc_stderr) | | 0.015 |
| mrpc (acc) | 0.316 | 0.515 |
| mrpc (acc_stderr) | | 0.025 |
| mrpc (f1) | 0.0 | 0.631 |
| mrpc (f1_stderr) | | 0.024 |
| qnli (acc) | 0.527 | 0.472 |
| qnli (acc_stderr) | | 0.016 |
| qqp (acc) | 0.673 | 0.34 |
| qqp (acc_stderr) | | 0.015 |
| qqp (f1) | 0.0 | 0.483 |
| qqp (f1_stderr) | | 0.017 |
| rte (acc) | 0.527 | 0.516 |
| rte (acc_stderr) | | 0.03 |
| sst2 (acc) | 0.557 | 0.511 |
| sst2 (acc_stderr) | | 0.017 |
| wikitext (bits_per_byte) | 1.979 | |
| wikitext (byte_perplexity) | 3.942 | |
| wikitext (word_perplexity) | 1533.0 | |
| wnli (acc) | 0.437 | 0.451 |
| wnli (acc_stderr) | | 0.059 |
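
The task and metric names above match the conventions of EleutherAI's lm-evaluation-harness, so the scores can likely be reproduced with it; treat this as an assumption about the evaluation setup, since the card does not name the harness. A minimal sketch:

```python
import lm_eval

# Assumes these scores come from EleutherAI's lm-evaluation-harness;
# swap "gpt2" for this model's repo id to evaluate the student instead
# of the teacher.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2,dtype=bfloat16",
    tasks=["ai2_arc", "boolq", "glue", "hellaswag", "wikitext"],
)
print(results["results"])
```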

# Resource Usage Comparison

- VRAM Use: 7.4923 GB

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB

<details>
<summary>Module Diff Details</summary>

```diff

```

</details>
<br/>
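
Although the module diff above is empty, the `bitnet` and `1.58b` tags and the `student_model_as_bitnet: True` hyperparameter below indicate the student's linear layers are trained BitNet b1.58-style, with weights constrained to the ternary set {-1, 0, +1}. A minimal sketch of the absmean weight quantizer from the BitNet b1.58 paper, not Distily's exact implementation:

```python
import torch

def absmean_ternary(w: torch.Tensor) -> torch.Tensor:
    # BitNet b1.58 weight quantization: scale by the mean absolute value,
    # then round and clip every weight to {-1, 0, +1}; rescaling recovers
    # a dequantized tensor for use in the matmul.
    scale = w.abs().mean().clamp(min=1e-5)
    return (w / scale).round().clamp_(-1, 1) * scale
```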

# Train Dataset
Trained on 923,203 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `990`
- Subset: `20231101.en`
- Split: `train`
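
The data slice can be approximated with the `datasets` library; Distily's own sampling, shuffling, and tokenization may differ, and the 990 train samples are what remains of the 1,000-row sample after the `dataset_test_size: 0.01` holdout listed below:

```python
from datasets import load_dataset

# Stream English Wikipedia and take a sample of the same size
# (dataset_sample_size=1000); Distily's own sampling may differ.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
sample = list(ds.take(1000))
print(sample[0]["text"][:200])
```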

# Training Objective

```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl))
```

# Hyperparameters
The following hyperparameters were used during training:

<details>
<summary>Expand</summary>

- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `constant`
- lr_scheduler_warmup_ratio: `0.2`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl))`
- train_embeddings: `True`
- lr_scheduler: `torch.optim.lr_scheduler.LambdaLR`
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `1000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.2`
- warmup_steps: `0`
- gradient_checkpointing: `True`

</details>
<br/>
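
For reference, a hypothetical reconstruction of the optimizer and constant schedule from the values above; Distily drives training through the Hugging Face Trainer, so the actual construction may differ:

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in module; substitute the student model

# Adam with the logged betas/epsilon and no weight decay.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
# lr_scheduler_type `constant` keeps the learning rate fixed; the logged
# scheduler object was a LambdaLR with an identity multiplier.
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda _: 1.0)
```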

# Framework Versions
- Distily 0.3.0
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 2.21.0
| | |