craa committed on
Commit e196767 · verified · 1 Parent(s): 54b5f15

Training in progress, step 20000

README.md ADDED
@@ -0,0 +1,111 @@
+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ datasets:
+ - craa/100M
+ metrics:
+ - accuracy
+ model-index:
+ - name: 100M_1208
+   results:
+   - task:
+       name: Causal Language Modeling
+       type: text-generation
+     dataset:
+       name: craa/100M
+       type: craa/100M
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.396690747155734
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # 100M_1208
+ 
+ This model is a fine-tuned version of [](https://huggingface.co/) on the craa/100M dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 3.2743
+ - Accuracy: 0.3967
+ 
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 0.0006
+ - train_batch_size: 32
+ - eval_batch_size: 16
+ - seed: 1208
+ - gradient_accumulation_steps: 5
+ - total_train_batch_size: 160
+ - optimizer: adamw_torch with betas=(0.9, 0.98) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 20.0
+ - mixed_precision_training: Native AMP
+ 
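+ For reference, here is a minimal sketch of how these values map onto `transformers.TrainingArguments`. The actual training script is not part of this commit, so the output directory and any flags beyond those listed above are assumptions:
+ 
+ ```python
+ from transformers import TrainingArguments
+ 
+ # Sketch only: mirrors the hyperparameter list above.
+ # Effective batch size: 32 per device * 5 accumulation steps = 160.
+ args = TrainingArguments(
+     output_dir="100M_1208",        # assumed; not stated in the card
+     learning_rate=6e-4,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=16,
+     seed=1208,
+     gradient_accumulation_steps=5,
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.98,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     warmup_steps=100,
+     num_train_epochs=20.0,
+     fp16=True,                     # "Native AMP"; could also have been bf16
+ )
+ ```
+ 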
+ ### Training results
+ 
+ | Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
+ |:-------------:|:-------:|:-----:|:---------------:|:--------:|
+ | 21.4151       | 0.5391  | 1000  | 4.1905          | 0.3008   |
+ | 19.0701       | 1.0782  | 2000  | 3.7933          | 0.3383   |
+ | 18.4085       | 1.6173  | 3000  | 3.6443          | 0.3525   |
+ | 17.6787       | 2.1563  | 4000  | 3.5533          | 0.3620   |
+ | 17.4852       | 2.6954  | 5000  | 3.4951          | 0.3677   |
+ | 16.9959       | 3.2345  | 6000  | 3.4563          | 0.3724   |
+ | 16.9895       | 3.7736  | 7000  | 3.4181          | 0.3761   |
+ | 16.5971       | 4.3127  | 8000  | 3.3983          | 0.3787   |
+ | 16.629        | 4.8518  | 9000  | 3.3727          | 0.3811   |
+ | 16.3035       | 5.3908  | 10000 | 3.3641          | 0.3828   |
+ | 16.3522       | 5.9299  | 11000 | 3.3433          | 0.3847   |
+ | 16.0895       | 6.4690  | 12000 | 3.3392          | 0.3861   |
+ | 16.018        | 7.0081  | 13000 | 3.3281          | 0.3871   |
+ | 15.9016       | 7.5472  | 14000 | 3.3211          | 0.3884   |
+ | 15.5317       | 8.0863  | 15000 | 3.3181          | 0.3892   |
+ | 15.7553       | 8.6253  | 16000 | 3.3057          | 0.3900   |
+ | 15.4523       | 9.1644  | 17000 | 3.3084          | 0.3906   |
+ | 15.6218       | 9.7035  | 18000 | 3.2949          | 0.3917   |
+ | 15.3394       | 10.2426 | 19000 | 3.2980          | 0.3922   |
+ | 15.4862       | 10.7817 | 20000 | 3.2827          | 0.3931   |
+ | 15.2552       | 11.3208 | 21000 | 3.2885          | 0.3931   |
+ | 15.3326       | 11.8598 | 22000 | 3.2771          | 0.3942   |
+ | 15.2132       | 12.3989 | 23000 | 3.2817          | 0.3941   |
+ | 15.2593       | 12.9380 | 24000 | 3.2723          | 0.3952   |
+ | 15.1103       | 13.4771 | 25000 | 3.2763          | 0.3952   |
+ | 14.9525       | 14.0162 | 26000 | 3.2771          | 0.3955   |
+ | 14.987        | 14.5553 | 27000 | 3.2730          | 0.3958   |
+ | 14.7762       | 15.0943 | 28000 | 3.2754          | 0.3959   |
+ | 14.9004       | 15.6334 | 29000 | 3.2706          | 0.3966   |
+ | 14.7148       | 16.1725 | 30000 | 3.2743          | 0.3967   |
+ | 14.8087       | 16.7116 | 31000 | 3.2671          | 0.3973   |
+ | 14.6624       | 17.2507 | 32000 | 3.2711          | 0.3973   |
+ | 14.7309       | 17.7898 | 33000 | 3.2652          | 0.3976   |
+ | 14.5986       | 18.3288 | 34000 | 3.2671          | 0.3977   |
+ | 14.6162       | 18.8679 | 35000 | 3.2635          | 0.3982   |
+ | 14.5047       | 19.4070 | 36000 | 3.2659          | 0.3981   |
+ | 14.4926       | 19.9461 | 37000 | 3.2638          | 0.3984   |
+ 
+ ### Framework versions
107
+
108
+ - Transformers 4.47.0.dev0
109
+ - Pytorch 2.5.0+cu124
110
+ - Datasets 3.0.2
111
+ - Tokenizers 0.20.1
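+ 
+ ### How to use
+ 
+ A minimal usage sketch. The repo id `craa/100M_1208` is inferred from the author and model name and is an assumption, not something this card states:
+ 
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ repo_id = "craa/100M_1208"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(repo_id)
+ 
+ inputs = tokenizer("Once upon a time", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```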
all_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+     "epoch": 20.0,
+     "eval_accuracy": 0.396690747155734,
+     "eval_loss": 3.2743115425109863,
+     "eval_runtime": 180.7243,
+     "eval_samples": 18012,
+     "eval_samples_per_second": 99.666,
+     "eval_steps_per_second": 6.23,
+     "perplexity": 26.425026709206048,
+     "total_flos": 1.55087795257344e+18,
+     "train_loss": 16.10652202595919,
+     "train_runtime": 128521.0074,
+     "train_samples": 296771,
+     "train_samples_per_second": 46.182,
+     "train_steps_per_second": 0.289
+ }
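
The reported perplexity is the exponential of the evaluation loss, the standard conversion for causal language models; a one-line check:

```python
import math

eval_loss = 3.2743115425109863
print(math.exp(eval_loss))  # ≈ 26.425026709206048, the "perplexity" field above
```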
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.47.0.dev0",
+   "use_cache": true,
+   "vocab_size": 52000
+ }
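
This is a GPT-2-small-shaped model (12 layers, 12 heads, 768-dim embeddings) with an enlarged 52,000-token vocabulary. A sketch, under those config values, of rebuilding the architecture and checking its size; the roughly 126M float32 parameters line up with the ~503 MB model.safetensors file added below:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Values not passed here fall back to GPT2Config defaults, which match the file above.
config = GPT2Config(vocab_size=52000, n_positions=1024, n_embd=768, n_layer=12, n_head=12)
model = GPT2LMHeadModel(config)

n_params = model.num_parameters()  # ≈ 125.8M (input and output embeddings are tied)
print(n_params * 4 / 1e6)          # ≈ 503 MB in float32, matching model.safetensors
```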
eval_results.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "epoch": 20.0,
+     "eval_accuracy": 0.396690747155734,
+     "eval_loss": 3.2743115425109863,
+     "eval_runtime": 180.7243,
+     "eval_samples": 18012,
+     "eval_samples_per_second": 99.666,
+     "eval_steps_per_second": 6.23,
+     "perplexity": 26.425026709206048
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.47.0.dev0"
+ }
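
These defaults (matching the BOS/EOS ids in config.json) are picked up automatically by `generate()`; a small sketch of inspecting them, using the same assumed repo id as above:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("craa/100M_1208")  # assumed repo id
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 50256 50256
```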
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9122d6804ba9f6b16fbabcd91513f0b7722feb3243a3da6217007b8bb6e23261
+ size 503128704
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 20.0,
+     "total_flos": 1.55087795257344e+18,
+     "train_loss": 16.10652202595919,
+     "train_runtime": 128521.0074,
+     "train_samples": 296771,
+     "train_samples_per_second": 46.182,
+     "train_steps_per_second": 0.289
+ }
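
These figures are internally consistent with a full 20-epoch run; a rough cross-check:

```python
train_runtime = 128521.0074  # seconds, roughly 35.7 hours

print(46.182 * train_runtime)  # ≈ 5.94M samples ≈ 296,771 samples * 20 epochs
print(0.289 * train_runtime)   # ≈ 37,143 optimizer steps, vs. 37,000 logged in the README table
```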
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d06f4686862ac63923246cbe2f6d549ac0e43a2b8d4f78e1b061d7db21b96d5
+ size 5304