lapp0 committed
Commit f3bc07d · verified · 1 parent: 4400eaf

End of training

README.md ADDED
@@ -0,0 +1,84 @@
---
base_model: roneneldan/TinyStories-33M
library_name: Distily
tags:
- generated_from_trainer
model-index:
- name: distily_bench_obj_cross_v2.1
  results: []
---

# distily_bench_obj_cross_v2.1

This student model was distilled from the teacher model [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) on an unspecified dataset.

The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
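
As a quick usage sketch (not generated by the trainer), the student can be loaded like any `transformers` causal LM. The repository id below is an assumption based on the model name and committer and may differ:

```python
# Minimal usage sketch: load the distilled student as a standard causal LM.
# The repository id is an assumption based on the model name and committer.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_bench_obj_cross_v2.1"  # assumed; adjust if the repo lives elsewhere
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```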

It achieves the following results on the evaluation set:
- eval_enwikippl: 2764.5312
- eval_frwikippl: 25222.875
- eval_zhwikippl: 53371.8828
- eval_tinystoriesppl: 892.0953
- eval_loss: 4.4843
- eval_runtime: 6.5829
- eval_samples_per_second: 75.955
- eval_steps_per_second: 9.57
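
The `*ppl` values above are corpus perplexities. As a rough sketch of how such a number is obtained (not the exact Distily evaluation code, whose batching and token accounting may differ), perplexity is the exponential of the mean next-token cross-entropy:

```python
# Rough sketch: corpus perplexity as exp(mean next-token cross-entropy).
# Illustrative only; Distily's evaluation batching and token counts may differ.
import math
import torch

def perplexity(model, tokenizer, texts, device="cpu"):
    model.to(device).eval()
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt").to(device)
            out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
            n_targets = enc["input_ids"].numel() - 1     # shifted next-token targets
            total_nll += out.loss.item() * n_targets
            total_tokens += n_targets
    return math.exp(total_nll / max(total_tokens, 1))
```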

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
-->

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=0, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None)) (see the KL sketch after this list)
- train_embeddings: True
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- num_epochs: 1.0
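
For orientation, a `logits` component with `weight=1` and `loss_fn=kl` corresponds to training the student to match the teacher's next-token distribution. The sketch below shows one common way to write such a KL objective in PyTorch; it is illustrative only and is not Distily's exact implementation:

```python
# Illustrative sketch of a logits-only KL distillation objective (weight=1, loss_fn=kl).
# Not Distily's exact code; the reduction and temperature handling are assumptions.
import torch.nn.functional as F

def logits_kl_loss(student_logits, teacher_logits, temperature=1.0):
    # KL(teacher || student) over the vocabulary; "batchmean" divides by the batch size.
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    t_logprobs = F.log_softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s_logprobs, t_logprobs, log_target=True, reduction="batchmean")

# Typical training-step usage (teacher frozen):
#   with torch.no_grad():
#       teacher_logits = teacher(**batch).logits
#   loss = logits_kl_loss(student(**batch).logits, teacher_logits)
```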

### Resource Usage
Peak GPU Memory: 8.0557 GB

### Eval-Phase Metrics
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **teacher eval** | | 169.9865 | 47377.9414 | | | | | 3.9789 | 4998.1294 |
| 0 | 0 | 16542.4961 | 47254.6641 | 5.8655 | 6.7367 | 74.221 | 9.352 | 9126.0117 | 61167.5938 |
| 500 | 0.0808 | 2812.9202 | 25326.1113 | 4.4868 | 6.6023 | 75.731 | 9.542 | 917.2969 | 53815.1641 |
| 1000 | 0.1616 | 2772.8962 | 25237.0957 | 4.4847 | 6.66 | 75.075 | 9.459 | 899.8722 | 53386.1367 |
| 1500 | 0.2424 | 2766.2441 | 25222.875 | 4.4847 | 6.6009 | 75.748 | 9.544 | 893.1282 | 53258.0898 |
| 2000 | 0.3232 | 2764.5312 | 25208.6641 | 4.4847 | 6.571 | 76.091 | 9.588 | 892.3901 | 53258.0898 |
| 2500 | 0.4040 | 2764.5312 | 25215.7812 | 4.4847 | 6.6023 | 75.732 | 9.542 | 892.0953 | 53258.0898 |
| 3000 | 0.4848 | 2762.8181 | 25222.875 | 4.4843 | 6.5857 | 75.922 | 9.566 | 891.5054 | 53258.0898 |
| 3500 | 0.5656 | 2761.5339 | 25215.7812 | 4.4843 | 6.6041 | 75.71 | 9.539 | 889.5178 | 53258.0898 |
| 4000 | 0.6464 | 2761.5339 | 25215.7812 | 4.4843 | 6.5987 | 75.773 | 9.547 | 889.5178 | 53286.5391 |
| 4500 | 0.7272 | 2761.5339 | 25215.7812 | 4.4843 | 6.5886 | 75.889 | 9.562 | 889.2234 | 53286.5391 |
| 5000 | 0.8080 | 2761.5339 | 25215.7812 | 4.4847 | 6.6039 | 75.713 | 9.54 | 889.2234 | 53286.5391 |
| 5500 | 0.8888 | 2761.5339 | 25215.7812 | 4.4843 | 6.6285 | 75.431 | 9.504 | 889.5178 | 53258.0898 |
| 6000 | 0.9696 | 2764.5312 | 25222.875 | 4.4837 | 6.6075 | 75.671 | 9.535 | 892.0953 | 53314.9570 |
| 6188 | 1.0 | 2764.5312 | 25222.875 | 4.4843 | 6.5829 | 75.955 | 9.57 | 892.0953 | 53371.8828 |

### Framework versions
- Distily 0.2.0
- Transformers 4.44.0
- Pytorch 2.3.0
- Datasets 2.20.0
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.44.0"
}
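
This `generation_config.json` is applied automatically by `model.generate()` in `transformers`. As a hedged sketch (the repository id is assumed, as above), it can also be loaded and inspected explicitly:

```python
# Sketch: generation_config.json is applied automatically by model.generate(),
# but it can also be loaded and inspected explicitly. Repo id assumed, as above.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("lapp0/distily_bench_obj_cross_v2.1")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # expected: 50256 50256

# Override settings at call time without editing the file, e.g.:
# outputs = model.generate(**inputs, generation_config=gen_config, max_new_tokens=40)
```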
logs/attn_loss_fn=None, attn_weight=0, gradient_accumulation_steps=1, hs_loss_fn=0, hs_weight=0, learning_rate=0.0004, lr_scheduler_type=constant_with_warmup, max_grad_norm=1.0, num_warmup_steps=0, optim=p/events.out.tfevents.1723838856.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a1e9d45587624078662de4cd9bfa0f951308722f20ec45317121c6989277c1b1
size 307