craa committed
Commit 6167fc6 · verified · 1 parent: 93dac8b

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. cost_to_hit_frequency_2128/README.md +131 -0
  2. cost_to_hit_frequency_2128/all_results.json +16 -0
  3. cost_to_hit_frequency_2128/checkpoint-30000/config.json +31 -0
  4. cost_to_hit_frequency_2128/checkpoint-30000/generation_config.json +6 -0
  5. cost_to_hit_frequency_2128/checkpoint-30000/merges.txt +0 -0
  6. cost_to_hit_frequency_2128/checkpoint-30000/model.safetensors +3 -0
  7. cost_to_hit_frequency_2128/checkpoint-30000/optimizer.pt +3 -0
  8. cost_to_hit_frequency_2128/checkpoint-30000/rng_state.pth +3 -0
  9. cost_to_hit_frequency_2128/checkpoint-30000/scaler.pt +3 -0
  10. cost_to_hit_frequency_2128/checkpoint-30000/scheduler.pt +3 -0
  11. cost_to_hit_frequency_2128/checkpoint-30000/special_tokens_map.json +5 -0
  12. cost_to_hit_frequency_2128/checkpoint-30000/tokenizer.json +0 -0
  13. cost_to_hit_frequency_2128/checkpoint-30000/tokenizer_config.json +20 -0
  14. cost_to_hit_frequency_2128/checkpoint-30000/trainer_state.json +0 -0
  15. cost_to_hit_frequency_2128/checkpoint-30000/training_args.bin +3 -0
  16. cost_to_hit_frequency_2128/checkpoint-30000/vocab.json +0 -0
  17. cost_to_hit_frequency_2128/checkpoint-40000/config.json +31 -0
  18. cost_to_hit_frequency_2128/checkpoint-40000/generation_config.json +6 -0
  19. cost_to_hit_frequency_2128/checkpoint-40000/merges.txt +0 -0
  20. cost_to_hit_frequency_2128/checkpoint-40000/model.safetensors +3 -0
  21. cost_to_hit_frequency_2128/checkpoint-40000/optimizer.pt +3 -0
  22. cost_to_hit_frequency_2128/checkpoint-40000/rng_state.pth +3 -0
  23. cost_to_hit_frequency_2128/checkpoint-40000/scaler.pt +3 -0
  24. cost_to_hit_frequency_2128/checkpoint-40000/scheduler.pt +3 -0
  25. cost_to_hit_frequency_2128/checkpoint-40000/special_tokens_map.json +5 -0
  26. cost_to_hit_frequency_2128/checkpoint-40000/tokenizer.json +0 -0
  27. cost_to_hit_frequency_2128/checkpoint-40000/tokenizer_config.json +20 -0
  28. cost_to_hit_frequency_2128/checkpoint-40000/trainer_state.json +0 -0
  29. cost_to_hit_frequency_2128/checkpoint-40000/training_args.bin +3 -0
  30. cost_to_hit_frequency_2128/checkpoint-40000/vocab.json +0 -0
  31. cost_to_hit_frequency_2128/checkpoint-50000/config.json +31 -0
  32. cost_to_hit_frequency_2128/checkpoint-50000/generation_config.json +6 -0
  33. cost_to_hit_frequency_2128/checkpoint-50000/merges.txt +0 -0
  34. cost_to_hit_frequency_2128/checkpoint-50000/model.safetensors +3 -0
  35. cost_to_hit_frequency_2128/checkpoint-50000/optimizer.pt +3 -0
  36. cost_to_hit_frequency_2128/checkpoint-50000/rng_state.pth +3 -0
  37. cost_to_hit_frequency_2128/checkpoint-50000/scaler.pt +3 -0
  38. cost_to_hit_frequency_2128/checkpoint-50000/scheduler.pt +3 -0
  39. cost_to_hit_frequency_2128/checkpoint-50000/special_tokens_map.json +5 -0
  40. cost_to_hit_frequency_2128/checkpoint-50000/tokenizer.json +0 -0
  41. cost_to_hit_frequency_2128/checkpoint-50000/tokenizer_config.json +20 -0
  42. cost_to_hit_frequency_2128/checkpoint-50000/trainer_state.json +0 -0
  43. cost_to_hit_frequency_2128/checkpoint-50000/training_args.bin +3 -0
  44. cost_to_hit_frequency_2128/checkpoint-50000/vocab.json +0 -0
  45. cost_to_hit_frequency_2128/checkpoint-60000/config.json +31 -0
  46. cost_to_hit_frequency_2128/checkpoint-60000/generation_config.json +6 -0
  47. cost_to_hit_frequency_2128/checkpoint-60000/merges.txt +0 -0
  48. cost_to_hit_frequency_2128/checkpoint-60000/model.safetensors +3 -0
  49. cost_to_hit_frequency_2128/checkpoint-60000/optimizer.pt +3 -0
  50. cost_to_hit_frequency_2128/checkpoint-60000/rng_state.pth +3 -0
cost_to_hit_frequency_2128/README.md ADDED
@@ -0,0 +1,131 @@
+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: exceptions_exp2_cost_to_hit_frequency_2128
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/craaaa/exceptions_exp2/runs/bzy2btrz)
+ # exceptions_exp2_cost_to_hit_frequency_2128
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 3.5464
+ - Accuracy: 0.3716
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0006
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 2128
+ - gradient_accumulation_steps: 5
+ - total_train_batch_size: 80
+ - optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.98), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 20.0
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-------:|:-----:|:---------------:|:--------:|
+ | 4.836 | 0.2913 | 1000 | 4.7542 | 0.2547 |
+ | 4.3423 | 0.5826 | 2000 | 4.2851 | 0.2993 |
+ | 4.1418 | 0.8739 | 3000 | 4.0991 | 0.3155 |
+ | 3.991 | 1.1652 | 4000 | 3.9937 | 0.3253 |
+ | 3.9297 | 1.4565 | 5000 | 3.9147 | 0.3317 |
+ | 3.8779 | 1.7478 | 6000 | 3.8544 | 0.3371 |
+ | 3.7427 | 2.0390 | 7000 | 3.8124 | 0.3416 |
+ | 3.7492 | 2.3303 | 8000 | 3.7812 | 0.3447 |
+ | 3.7372 | 2.6216 | 9000 | 3.7514 | 0.3474 |
+ | 3.7056 | 2.9130 | 10000 | 3.7261 | 0.3498 |
+ | 3.6285 | 3.2042 | 11000 | 3.7096 | 0.3520 |
+ | 3.638 | 3.4955 | 12000 | 3.6934 | 0.3540 |
+ | 3.616 | 3.7868 | 13000 | 3.6741 | 0.3555 |
+ | 3.5277 | 4.0781 | 14000 | 3.6672 | 0.3568 |
+ | 3.5578 | 4.3694 | 15000 | 3.6568 | 0.3576 |
+ | 3.5631 | 4.6607 | 16000 | 3.6425 | 0.3594 |
+ | 3.5746 | 4.9520 | 17000 | 3.6273 | 0.3607 |
+ | 3.4835 | 5.2432 | 18000 | 3.6301 | 0.3612 |
+ | 3.5199 | 5.5345 | 19000 | 3.6201 | 0.3620 |
+ | 3.5176 | 5.8259 | 20000 | 3.6096 | 0.3629 |
+ | 3.4233 | 6.1171 | 21000 | 3.6123 | 0.3633 |
+ | 3.4484 | 6.4084 | 22000 | 3.6026 | 0.3641 |
+ | 3.4673 | 6.6997 | 23000 | 3.5936 | 0.3648 |
+ | 3.4728 | 6.9910 | 24000 | 3.5832 | 0.3659 |
+ | 3.4081 | 7.2823 | 25000 | 3.5913 | 0.3657 |
+ | 3.4258 | 7.5736 | 26000 | 3.5828 | 0.3662 |
+ | 3.4357 | 7.8649 | 27000 | 3.5754 | 0.3675 |
+ | 3.3437 | 8.1561 | 28000 | 3.5826 | 0.3672 |
+ | 3.3846 | 8.4474 | 29000 | 3.5741 | 0.3677 |
+ | 3.397 | 8.7388 | 30000 | 3.5654 | 0.3687 |
+ | 3.2933 | 9.0300 | 31000 | 3.5706 | 0.3684 |
+ | 3.3448 | 9.3213 | 32000 | 3.5704 | 0.3688 |
+ | 3.362 | 9.6126 | 33000 | 3.5591 | 0.3695 |
+ | 3.3664 | 9.9039 | 34000 | 3.5527 | 0.3701 |
+ | 3.3097 | 10.1952 | 35000 | 3.5632 | 0.3696 |
+ | 3.3296 | 10.4865 | 36000 | 3.5596 | 0.3700 |
+ | 3.3485 | 10.7778 | 37000 | 3.5511 | 0.3708 |
+ | 3.2602 | 11.0690 | 38000 | 3.5577 | 0.3708 |
+ | 3.2927 | 11.3603 | 39000 | 3.5537 | 0.3709 |
+ | 3.2965 | 11.6517 | 40000 | 3.5464 | 0.3716 |
+ | 3.3093 | 11.9430 | 41000 | 3.5401 | 0.3719 |
+ | 3.2578 | 12.2342 | 42000 | 3.5509 | 0.3716 |
+ | 3.2759 | 12.5255 | 43000 | 3.5461 | 0.3721 |
+ | 3.2977 | 12.8168 | 44000 | 3.5381 | 0.3729 |
+ | 3.2116 | 13.1081 | 45000 | 3.5465 | 0.3725 |
+ | 3.2472 | 13.3994 | 46000 | 3.5436 | 0.3728 |
+ | 3.2546 | 13.6907 | 47000 | 3.5366 | 0.3733 |
+ | 3.2669 | 13.9820 | 48000 | 3.5320 | 0.3736 |
+ | 3.2044 | 14.2732 | 49000 | 3.5419 | 0.3734 |
+ | 3.2363 | 14.5646 | 50000 | 3.5365 | 0.3739 |
+ | 3.2326 | 14.8559 | 51000 | 3.5300 | 0.3743 |
+ | 3.1742 | 15.1471 | 52000 | 3.5394 | 0.3739 |
+ | 3.2021 | 15.4384 | 53000 | 3.5332 | 0.3746 |
+ | 3.2011 | 15.7297 | 54000 | 3.5290 | 0.3747 |
+ | 3.1483 | 16.0210 | 55000 | 3.5327 | 0.3748 |
+ | 3.16 | 16.3123 | 56000 | 3.5347 | 0.3746 |
+ | 3.1867 | 16.6036 | 57000 | 3.5271 | 0.3752 |
+ | 3.1814 | 16.8949 | 58000 | 3.5252 | 0.3755 |
+ | 3.1394 | 17.1861 | 59000 | 3.5330 | 0.3753 |
+ | 3.1498 | 17.4775 | 60000 | 3.5306 | 0.3755 |
+ | 3.1687 | 17.7688 | 61000 | 3.5236 | 0.3760 |
+ | 3.1237 | 18.0600 | 62000 | 3.5266 | 0.3760 |
+ | 3.1318 | 18.3513 | 63000 | 3.5264 | 0.3759 |
+ | 3.1343 | 18.6426 | 64000 | 3.5249 | 0.3764 |
+ | 3.1352 | 18.9339 | 65000 | 3.5207 | 0.3766 |
+ | 3.1114 | 19.2252 | 66000 | 3.5234 | 0.3765 |
+ | 3.129 | 19.5165 | 67000 | 3.5234 | 0.3766 |
+ | 3.1103 | 19.8078 | 68000 | 3.5216 | 0.3767 |
+
+
+ ### Framework versions
+
+ - Transformers 4.55.2
+ - Pytorch 2.8.0+cu128
+ - Datasets 4.0.0
+ - Tokenizers 0.21.4
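
Two notes on the card above: total_train_batch_size 80 is just train_batch_size 16 × gradient_accumulation_steps 5, and the headline Loss/Accuracy (3.5464 / 0.3716) match the step-40000 row of the table, which would be consistent with a best checkpoint being selected rather than the last one. As a minimal, hedged sketch — the local path below is an assumption, not something this commit pins down — loading a checkpoint and sampling from it might look like:

```python
# Minimal sketch, assuming a local clone of this repo; adjust the path (or use
# the equivalent Hub repo id) as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "cost_to_hit_frequency_2128/checkpoint-40000"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path)  # GPT2LMHeadModel per config.json

inputs = tokenizer("The exception proves", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```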
cost_to_hit_frequency_2128/all_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+     "epoch": 20.0,
+     "eval_accuracy": 0.3716299568587447,
+     "eval_loss": 3.5464437007904053,
+     "eval_runtime": 179.9004,
+     "eval_samples": 16644,
+     "eval_samples_per_second": 92.518,
+     "eval_steps_per_second": 5.787,
+     "perplexity": 34.689730800009954,
+     "total_flos": 1.43513603407872e+18,
+     "train_loss": 3.438410573848788,
+     "train_runtime": 137369.0027,
+     "train_samples": 274623,
+     "train_samples_per_second": 39.983,
+     "train_steps_per_second": 0.5
+ }
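
Two internal consistency checks on these numbers: the reported perplexity is exp(eval_loss), and eval_samples_per_second is eval_samples / eval_runtime. A quick verification:

```python
import math

# Values copied from all_results.json above.
eval_loss = 3.5464437007904053
eval_samples, eval_runtime = 16644, 179.9004

print(math.exp(eval_loss))          # 34.6897...  == "perplexity"
print(eval_samples / eval_runtime)  # 92.518      == "eval_samples_per_second"
```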
cost_to_hit_frequency_2128/checkpoint-30000/config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.55.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
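
This is the stock GPT-2-small shape (12 layers, 12 heads, 768-dim embeddings, 50257-token vocab), roughly 124M float32 parameters, which lines up with the ~498 MB model.safetensors below. A hedged sketch of rebuilding the architecture from this file (the local path is an assumption):

```python
# Sketch: reconstruct the architecture from the config above. The path is an
# assumption; adjust it to wherever this checkpoint is cloned.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config.from_pretrained("cost_to_hit_frequency_2128/checkpoint-30000")
model = GPT2LMHeadModel(config)  # randomly initialized; from_pretrained() loads weights
print(f"{sum(p.numel() for p in model.parameters()):,}")  # ~124M parameters
```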
cost_to_hit_frequency_2128/checkpoint-30000/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.55.2"
+ }
cost_to_hit_frequency_2128/checkpoint-30000/merges.txt ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-30000/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15eff0546433eae5fcecd7242796be8b625c9cbfcb550107310407e993979a2f
+ size 497774208
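
The three lines above are a Git LFS pointer, not the weights themselves: the actual ~498 MB file is stored out of band and identified by its SHA-256 and byte size. A small sketch (local filename assumed) of checking a downloaded copy against this pointer:

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
# The local path is an assumption.
import hashlib
import os

path = "model.safetensors"
expected_oid = "15eff0546433eae5fcecd7242796be8b625c9cbfcb550107310407e993979a2f"
expected_size = 497774208

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("file matches pointer")
```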
cost_to_hit_frequency_2128/checkpoint-30000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f00a95e2440639bc5b56de4da1e6fc8d240080959df3872b9f78f09b4caf9a22
+ size 995644811
cost_to_hit_frequency_2128/checkpoint-30000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a83888c246e208f0fbb7f99b6a5848c0701bd3de35fcbddacf2fb163dbed1af
+ size 14773
cost_to_hit_frequency_2128/checkpoint-30000/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff4341875e519c94e18fcbb09e5679daac8d5ceef9652233684801e8641eff08
+ size 1383
cost_to_hit_frequency_2128/checkpoint-30000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:301efb1017ab1ce751d8060f27c0667481947139f2c5d378534548d2f869629f
+ size 1465
cost_to_hit_frequency_2128/checkpoint-30000/special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
cost_to_hit_frequency_2128/checkpoint-30000/tokenizer.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-30000/tokenizer_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "50256": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "extra_special_tokens": {},
+   "model_max_length": 1024,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
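
The tokenizer is the stock GPT-2 byte-level BPE: a single special token, `<|endoftext|>` (id 50256), doubles as BOS, EOS, and UNK, and inputs are capped at model_max_length 1024. A hedged sketch of loading it (local path assumed):

```python
# Sketch: load the tokenizer saved alongside this checkpoint. The path is an
# assumption; adjust to your local clone.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("cost_to_hit_frequency_2128/checkpoint-30000")
print(tok.eos_token, tok.eos_token_id)  # <|endoftext|> 50256
print(tok("Hello world").input_ids)     # byte-level BPE ids
```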
cost_to_hit_frequency_2128/checkpoint-30000/trainer_state.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-30000/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20f748c64364a58a4f3b9ecaa8ad6e0708fdb6f6064c4fe9f0a8b760e4cf2ae3
+ size 5969
cost_to_hit_frequency_2128/checkpoint-30000/vocab.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-40000/config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.55.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
cost_to_hit_frequency_2128/checkpoint-40000/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.55.2"
+ }
cost_to_hit_frequency_2128/checkpoint-40000/merges.txt ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-40000/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0d2c282e441baa606ce9b816e263406f81ea5beb99df861dd82d8fd7ad78d31
+ size 497774208
cost_to_hit_frequency_2128/checkpoint-40000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d94309c30fe3a3ffdb5874e7447a44fbf756a1e4a8c32502c7dfb5296f823b33
+ size 995644811
cost_to_hit_frequency_2128/checkpoint-40000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9f11a1a44e39d6aa8689843ed1a17279bc49b7fc67e589b5bc9e78e2ea5257c
+ size 14773
cost_to_hit_frequency_2128/checkpoint-40000/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d30e0d467a783bf8ad251da21fdf6502bfe87d6923cc16f79e2feb040e6506bd
+ size 1383
cost_to_hit_frequency_2128/checkpoint-40000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6859e009e74a90c701fc291ca3ce2c55b6127473c95e8cad74136c092fc90eb6
+ size 1465
cost_to_hit_frequency_2128/checkpoint-40000/special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
cost_to_hit_frequency_2128/checkpoint-40000/tokenizer.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-40000/tokenizer_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "50256": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "extra_special_tokens": {},
+   "model_max_length": 1024,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
cost_to_hit_frequency_2128/checkpoint-40000/trainer_state.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-40000/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20f748c64364a58a4f3b9ecaa8ad6e0708fdb6f6064c4fe9f0a8b760e4cf2ae3
+ size 5969
cost_to_hit_frequency_2128/checkpoint-40000/vocab.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-50000/config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.55.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
cost_to_hit_frequency_2128/checkpoint-50000/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.55.2"
+ }
cost_to_hit_frequency_2128/checkpoint-50000/merges.txt ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-50000/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6cc6b21ff94c5771afdc6762ad1714ffb967e85cc0cfb716dc0ec98014ced8e6
+ size 497774208
cost_to_hit_frequency_2128/checkpoint-50000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf5261017e0b0ee0d23f9b8d2852fbc3b91b7f445bc4c356e07fe629d2887ba5
+ size 995644811
cost_to_hit_frequency_2128/checkpoint-50000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5350ffff11d9c4ffcbf017f8fc0fc14f02b6c821167264f309a8779b3a5d988
+ size 14773
cost_to_hit_frequency_2128/checkpoint-50000/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cf09822b0a9b942f1a93d43c4773d91eb034b6672175d8595eb94616dba5a4f
+ size 1383
cost_to_hit_frequency_2128/checkpoint-50000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc8fc0c2226e9ab2cff6efec2bea9da1344a082a5b304e22da1fda36060e5c6b
+ size 1465
cost_to_hit_frequency_2128/checkpoint-50000/special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
cost_to_hit_frequency_2128/checkpoint-50000/tokenizer.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-50000/tokenizer_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "50256": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "extra_special_tokens": {},
+   "model_max_length": 1024,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
cost_to_hit_frequency_2128/checkpoint-50000/trainer_state.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-50000/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20f748c64364a58a4f3b9ecaa8ad6e0708fdb6f6064c4fe9f0a8b760e4cf2ae3
+ size 5969
cost_to_hit_frequency_2128/checkpoint-50000/vocab.json ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-60000/config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.55.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
cost_to_hit_frequency_2128/checkpoint-60000/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.55.2"
+ }
cost_to_hit_frequency_2128/checkpoint-60000/merges.txt ADDED
The diff for this file is too large to render.
cost_to_hit_frequency_2128/checkpoint-60000/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d39645cb26ea6c32a4aaff1d85d2197fe6ddea03e4c20cd7d9336c9de601dd01
+ size 497774208
cost_to_hit_frequency_2128/checkpoint-60000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cba43977e514d291319f405d4097896713fb00aaec0f625cbe612fe828665524
+ size 995644811
cost_to_hit_frequency_2128/checkpoint-60000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3f6ebf74d0e1c8dbf5b9ac38f7c0bedc178b80ade6367939034c133da54acc7
+ size 14773