BurnyCoder committed
Commit c0e955f · verified · 1 Parent(s): 41533c2

Initial upload of EsperBERTo model

README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ language: eo
+ license: mit
+ ---
+
+ # EsperBERTo: A RoBERTa-like model for Esperanto
+
+ This is a RoBERTa-like model trained from scratch on the Esperanto language.
+
+ ## Model description
+
+ The model has 6 layers, a hidden size of 768, 12 attention heads, and 84 million parameters in total. It is based on the RoBERTa architecture. The tokenizer is a byte-level Byte-Pair Encoding (BPE) tokenizer trained from scratch on the same Esperanto corpus. A minimal loading sketch follows the summary below.
+
+ - **Model:** RoBERTa-like
+ - **Layers:** 6
+ - **Hidden size:** 768
+ - **Heads:** 12
+ - **Parameters:** 84M
+ - **Tokenizer:** Byte-level BPE
+ - **Vocabulary size:** 52,000
+
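+ The checkpoint can be loaded with the standard `transformers` Auto classes. This is a minimal sketch; `./EsperBERTo` assumes a local copy of this repository and can be replaced with the model id on the Hub:
+
+ ```python
+ from transformers import AutoModelForMaskedLM, AutoTokenizer
+
+ # Point this at a local clone of the repo or the model id on the Hub.
+ tokenizer = AutoTokenizer.from_pretrained("./EsperBERTo")
+ model = AutoModelForMaskedLM.from_pretrained("./EsperBERTo")
+ ```
+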
+ ## Training data
+
+ The model was trained on the Esperanto portion of the OSCAR corpus (`oscar.eo.txt`), which is approximately 3GB in size.
+
+ ## Training procedure
+
+ The model was trained for one epoch on the OSCAR corpus using the `Trainer` API from the `transformers` library. Training was performed on a single GPU; a sketch of the setup follows the hyperparameter list below.
+
+ ### Hyperparameters
+ - `output_dir`: "./EsperBERTo"
+ - `overwrite_output_dir`: `True`
+ - `num_train_epochs`: 1
+ - `per_gpu_train_batch_size`: 64
+ - `save_steps`: 10_000
+ - `save_total_limit`: 2
+ - `prediction_loss_only`: `True`
+
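+ These settings map directly onto `TrainingArguments`. A minimal sketch, assuming a prepared `model`, `data_collator`, and `train_dataset`; note that recent `transformers` versions spell the batch-size argument `per_device_train_batch_size`:
+
+ ```python
+ from transformers import Trainer, TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="./EsperBERTo",
+     overwrite_output_dir=True,
+     num_train_epochs=1,
+     per_device_train_batch_size=64,  # per_gpu_train_batch_size in older versions
+     save_steps=10_000,
+     save_total_limit=2,
+     prediction_loss_only=True,
+ )
+
+ trainer = Trainer(
+     model=model,                   # RobertaForMaskedLM initialized from the config
+     args=training_args,
+     data_collator=data_collator,   # e.g. DataCollatorForLanguageModeling
+     train_dataset=train_dataset,   # tokenized oscar.eo.txt
+ )
+ trainer.train()
+ ```
+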
39
+ The final training loss was `6.1178`.
40
+
+ ## Evaluation results
+
+ The model was not evaluated on a downstream task in the notebook. However, its capabilities can be tested using the `fill-mask` pipeline.
+
+ Example 1:
+ ```python
+ from transformers import pipeline
+
+ fill_mask = pipeline(
+     "fill-mask",
+     model="./EsperBERTo",
+     tokenizer="./EsperBERTo"
+ )
+
+ fill_mask("La suno <mask>.")
+ ```
+ Output:
+ ```
+ [{'score': 0.013023526407778263, 'token': 316, 'token_str': ' estas', 'sequence': 'La suno estas.'},
+  {'score': 0.008523152209818363, 'token': 607, 'token_str': ' min', 'sequence': 'La suno min.'},
+  {'score': 0.007405377924442291, 'token': 2575, 'token_str': ' okuloj', 'sequence': 'La suno okuloj.'},
+  {'score': 0.007219308987259865, 'token': 1635, 'token_str': ' tago', 'sequence': 'La suno tago.'},
+  {'score': 0.006888304837048054, 'token': 394, 'token_str': ' estis', 'sequence': 'La suno estis.'}]
+ ```
+
+ Example 2:
+ ```python
+ fill_mask("Jen la komenco de bela <mask>.")
+ ```
+ Output:
+ ```
+ [{'score': 0.016247423365712166, 'token': 1635, 'token_str': ' tago', 'sequence': 'Jen la komenco de bela tago.'},
+  {'score': 0.009718689136207104, 'token': 1021, 'token_str': ' tempo', 'sequence': 'Jen la komenco de bela tempo.'},
+  {'score': 0.007543196901679039, 'token': 2257, 'token_str': ' kongreso', 'sequence': 'Jen la komenco de bela kongreso.'},
+  {'score': 0.0071307034231722355, 'token': 1161, 'token_str': ' vivo', 'sequence': 'Jen la komenco de bela vivo.'},
+  {'score': 0.006644904613494873, 'token': 758, 'token_str': ' jaroj', 'sequence': 'Jen la komenco de bela jaroj.'}]
+ ```
+
+ ## Intended uses & limitations
+
+ This model is intended to be a general-purpose language model for Esperanto. It can be used for masked language modeling and can be fine-tuned for various downstream tasks such as:
+ - Text Classification
+ - Token Classification (Part-of-Speech Tagging, Named Entity Recognition)
+ - Question Answering
+
+ Since the model was trained on a relatively small dataset, its performance may be limited. For better results on specific tasks, fine-tuning on a relevant dataset is recommended; a sketch follows below.
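+
+ As an illustration only, a hedged sketch of fine-tuning for text classification; the dataset, label count, and output directory are placeholders, not part of this repository:
+
+ ```python
+ from transformers import (
+     AutoModelForSequenceClassification,
+     AutoTokenizer,
+     Trainer,
+     TrainingArguments,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained("./EsperBERTo")
+ # num_labels depends on the task; 2 is an arbitrary placeholder.
+ model = AutoModelForSequenceClassification.from_pretrained("./EsperBERTo", num_labels=2)
+
+ def tokenize(batch):
+     return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)
+
+ # train_dataset is assumed to be a datasets.Dataset with "text" and "label" columns.
+ train_dataset = train_dataset.map(tokenize, batched=True)
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="./esperberto-cls", num_train_epochs=3),
+     train_dataset=train_dataset,
+ )
+ trainer.train()
+ ```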
checkpoint-10000/config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "architectures": [
+     "RobertaForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.53.0.dev0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 52000
+ }
checkpoint-10000/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-10000/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d60e2e32c32f8c40e804d7c353c374085847227e5f00da50eb4762662496e1b
+ size 334030264
checkpoint-10000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32e6396ae51291f9b22bdbc3a863dfc5068ffc2d447a218764093495ed119994
+ size 668124683
checkpoint-10000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:972c860353848b9bc0947f70085405424a6c794b8a204e40286ae3c69298208b
+ size 14645
checkpoint-10000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cbb445dc162b5dd4346ca2fb80abebbbf76da9a7736009b794b959a33f32116
+ size 1465
checkpoint-10000/special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "cls_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "eos_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "mask_token": { "content": "<mask>", "lstrip": true, "normalized": false, "rstrip": false, "single_word": false },
+   "pad_token": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "sep_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "unk_token": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false }
+ }
checkpoint-10000/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-10000/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "1": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "2": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "3": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "4": { "content": "<mask>", "lstrip": true, "normalized": false, "rstrip": false, "single_word": false, "special": true }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "max_len": 512,
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
checkpoint-10000/trainer_state.json ADDED
@@ -0,0 +1,174 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.6566850538481744,
+   "eval_steps": 500,
+   "global_step": 10000,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     { "epoch": 0.03283425269240872, "grad_norm": 2.0115110874176025, "learning_rate": 4.836157079064881e-05, "loss": 7.8535, "step": 500 },
+     { "epoch": 0.06566850538481744, "grad_norm": 1.8765583038330078, "learning_rate": 4.671985815602837e-05, "loss": 7.2604, "step": 1000 },
+     { "epoch": 0.09850275807722617, "grad_norm": 1.813555121421814, "learning_rate": 4.507814552140793e-05, "loss": 7.0768, "step": 1500 },
+     { "epoch": 0.1313370107696349, "grad_norm": 2.100990056991577, "learning_rate": 4.34364328867875e-05, "loss": 6.9514, "step": 2000 },
+     { "epoch": 0.1641712634620436, "grad_norm": 2.4078993797302246, "learning_rate": 4.1794720252167065e-05, "loss": 6.8599, "step": 2500 },
+     { "epoch": 0.19700551615445233, "grad_norm": 1.9332386255264282, "learning_rate": 4.015300761754663e-05, "loss": 6.8066, "step": 3000 },
+     { "epoch": 0.22983976884686105, "grad_norm": 2.384559154510498, "learning_rate": 3.8511294982926185e-05, "loss": 6.7556, "step": 3500 },
+     { "epoch": 0.2626740215392698, "grad_norm": 2.159532308578491, "learning_rate": 3.6869582348305756e-05, "loss": 6.7005, "step": 4000 },
+     { "epoch": 0.29550827423167847, "grad_norm": 2.3822381496429443, "learning_rate": 3.522786971368532e-05, "loss": 6.6413, "step": 4500 },
+     { "epoch": 0.3283425269240872, "grad_norm": 2.6890079975128174, "learning_rate": 3.3586157079064884e-05, "loss": 6.5969, "step": 5000 },
+     { "epoch": 0.3611767796164959, "grad_norm": 2.734480857849121, "learning_rate": 3.194444444444444e-05, "loss": 6.5226, "step": 5500 },
+     { "epoch": 0.39401103230890466, "grad_norm": 3.228806257247925, "learning_rate": 3.0302731809824008e-05, "loss": 6.4609, "step": 6000 },
+     { "epoch": 0.42684528500131336, "grad_norm": 3.1673390865325928, "learning_rate": 2.8661019175203575e-05, "loss": 6.3505, "step": 6500 },
+     { "epoch": 0.4596795376937221, "grad_norm": 3.190369129180908, "learning_rate": 2.701930654058314e-05, "loss": 6.2476, "step": 7000 },
+     { "epoch": 0.4925137903861308, "grad_norm": 3.3769636154174805, "learning_rate": 2.53775939059627e-05, "loss": 6.1132, "step": 7500 },
+     { "epoch": 0.5253480430785396, "grad_norm": 3.465238332748413, "learning_rate": 2.3735881271342264e-05, "loss": 5.9959, "step": 8000 },
+     { "epoch": 0.5581822957709482, "grad_norm": 3.3762824535369873, "learning_rate": 2.209416863672183e-05, "loss": 5.8872, "step": 8500 },
+     { "epoch": 0.5910165484633569, "grad_norm": 3.428150177001953, "learning_rate": 2.0452456002101395e-05, "loss": 5.7939, "step": 9000 },
+     { "epoch": 0.6238508011557657, "grad_norm": 3.5672378540039062, "learning_rate": 1.881074336748096e-05, "loss": 5.7013, "step": 9500 },
+     { "epoch": 0.6566850538481744, "grad_norm": 3.6406631469726562, "learning_rate": 1.7169030732860522e-05, "loss": 5.6368, "step": 10000 }
+   ],
+   "logging_steps": 500,
+   "max_steps": 15228,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 1,
+   "save_steps": 10000,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 2.122034184192e+16,
+   "train_batch_size": 64,
+   "trial_name": null,
+   "trial_params": null
+ }
checkpoint-10000/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:361c27590c72754ba8ac5e3b227371e4ed2881a639738e66b897fd307c2a9ced
+ size 5649
checkpoint-10000/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-15228/config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "architectures": [
+     "RobertaForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.53.0.dev0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 52000
+ }
checkpoint-15228/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-15228/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d53dfb0c238e7b34f9694a39a127ac66ca5c144a09604945d7e92d77a1655005
+ size 334030264
checkpoint-15228/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f90766f63e48fda9391eba89db11452e23ce7facd101047c859652aae33c0bac
+ size 668124683
checkpoint-15228/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee3813d6e83c780347a7cc1f319e59174774e111e03a0945d35bc883d30a5776
+ size 14645
checkpoint-15228/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5727ed166a6972bfdc70485e858975d59802e964f76812631eb37007bea9a0ff
+ size 1465
checkpoint-15228/special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "cls_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "eos_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "mask_token": { "content": "<mask>", "lstrip": true, "normalized": false, "rstrip": false, "single_word": false },
+   "pad_token": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "sep_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "unk_token": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false }
+ }
checkpoint-15228/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-15228/tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "1": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "2": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "3": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "4": { "content": "<mask>", "lstrip": true, "normalized": false, "rstrip": false, "single_word": false, "special": true }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "max_len": 512,
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
checkpoint-15228/trainer_state.json ADDED
@@ -0,0 +1,244 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 1.0,
+   "eval_steps": 500,
+   "global_step": 15228,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     { "epoch": 0.03283425269240872, "grad_norm": 2.0115110874176025, "learning_rate": 4.836157079064881e-05, "loss": 7.8535, "step": 500 },
+     { "epoch": 0.06566850538481744, "grad_norm": 1.8765583038330078, "learning_rate": 4.671985815602837e-05, "loss": 7.2604, "step": 1000 },
+     { "epoch": 0.09850275807722617, "grad_norm": 1.813555121421814, "learning_rate": 4.507814552140793e-05, "loss": 7.0768, "step": 1500 },
+     { "epoch": 0.1313370107696349, "grad_norm": 2.100990056991577, "learning_rate": 4.34364328867875e-05, "loss": 6.9514, "step": 2000 },
+     { "epoch": 0.1641712634620436, "grad_norm": 2.4078993797302246, "learning_rate": 4.1794720252167065e-05, "loss": 6.8599, "step": 2500 },
+     { "epoch": 0.19700551615445233, "grad_norm": 1.9332386255264282, "learning_rate": 4.015300761754663e-05, "loss": 6.8066, "step": 3000 },
+     { "epoch": 0.22983976884686105, "grad_norm": 2.384559154510498, "learning_rate": 3.8511294982926185e-05, "loss": 6.7556, "step": 3500 },
+     { "epoch": 0.2626740215392698, "grad_norm": 2.159532308578491, "learning_rate": 3.6869582348305756e-05, "loss": 6.7005, "step": 4000 },
+     { "epoch": 0.29550827423167847, "grad_norm": 2.3822381496429443, "learning_rate": 3.522786971368532e-05, "loss": 6.6413, "step": 4500 },
+     { "epoch": 0.3283425269240872, "grad_norm": 2.6890079975128174, "learning_rate": 3.3586157079064884e-05, "loss": 6.5969, "step": 5000 },
+     { "epoch": 0.3611767796164959, "grad_norm": 2.734480857849121, "learning_rate": 3.194444444444444e-05, "loss": 6.5226, "step": 5500 },
+     { "epoch": 0.39401103230890466, "grad_norm": 3.228806257247925, "learning_rate": 3.0302731809824008e-05, "loss": 6.4609, "step": 6000 },
+     { "epoch": 0.42684528500131336, "grad_norm": 3.1673390865325928, "learning_rate": 2.8661019175203575e-05, "loss": 6.3505, "step": 6500 },
+     { "epoch": 0.4596795376937221, "grad_norm": 3.190369129180908, "learning_rate": 2.701930654058314e-05, "loss": 6.2476, "step": 7000 },
+     { "epoch": 0.4925137903861308, "grad_norm": 3.3769636154174805, "learning_rate": 2.53775939059627e-05, "loss": 6.1132, "step": 7500 },
+     { "epoch": 0.5253480430785396, "grad_norm": 3.465238332748413, "learning_rate": 2.3735881271342264e-05, "loss": 5.9959, "step": 8000 },
+     { "epoch": 0.5581822957709482, "grad_norm": 3.3762824535369873, "learning_rate": 2.209416863672183e-05, "loss": 5.8872, "step": 8500 },
+     { "epoch": 0.5910165484633569, "grad_norm": 3.428150177001953, "learning_rate": 2.0452456002101395e-05, "loss": 5.7939, "step": 9000 },
+     { "epoch": 0.6238508011557657, "grad_norm": 3.5672378540039062, "learning_rate": 1.881074336748096e-05, "loss": 5.7013, "step": 9500 },
+     { "epoch": 0.6566850538481744, "grad_norm": 3.6406631469726562, "learning_rate": 1.7169030732860522e-05, "loss": 5.6368, "step": 10000 },
+     { "epoch": 0.6895193065405831, "grad_norm": 3.694791555404663, "learning_rate": 1.5527318098240086e-05, "loss": 5.5591, "step": 10500 },
+     { "epoch": 0.7223535592329918, "grad_norm": 3.98870587348938, "learning_rate": 1.388560546361965e-05, "loss": 5.4825, "step": 11000 },
+     { "epoch": 0.7551878119254006, "grad_norm": 3.6506927013397217, "learning_rate": 1.2243892828999212e-05, "loss": 5.4542, "step": 11500 },
+     { "epoch": 0.7880220646178093, "grad_norm": 3.9111599922180176, "learning_rate": 1.0602180194378776e-05, "loss": 5.3903, "step": 12000 },
+     { "epoch": 0.820856317310218, "grad_norm": 3.6450743675231934, "learning_rate": 8.96046755975834e-06, "loss": 5.3594, "step": 12500 },
+     { "epoch": 0.8536905700026267, "grad_norm": 3.8948936462402344, "learning_rate": 7.318754925137904e-06, "loss": 5.3447, "step": 13000 },
+     { "epoch": 0.8865248226950354, "grad_norm": 3.537013292312622, "learning_rate": 5.6770422905174684e-06, "loss": 5.2947, "step": 13500 },
+     { "epoch": 0.9193590753874442, "grad_norm": 3.3274927139282227, "learning_rate": 4.035329655897032e-06, "loss": 5.2935, "step": 14000 },
+     { "epoch": 0.9521933280798529, "grad_norm": 3.2864270210266113, "learning_rate": 2.3936170212765957e-06, "loss": 5.2692, "step": 14500 },
+     { "epoch": 0.9850275807722616, "grad_norm": 3.5905466079711914, "learning_rate": 7.519043866561598e-07, "loss": 5.2641, "step": 15000 }
+   ],
+   "logging_steps": 500,
+   "max_steps": 15228,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 1,
+   "save_steps": 10000,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": true
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 3.231269529606144e+16,
+   "train_batch_size": 64,
+   "trial_name": null,
+   "trial_params": null
+ }
checkpoint-15228/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:361c27590c72754ba8ac5e3b227371e4ed2881a639738e66b897fd307c2a9ced
+ size 5649
checkpoint-15228/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "architectures": [
+     "RobertaForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.53.0.dev0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 52000
+ }
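
For reference, this configuration corresponds to the following `RobertaConfig` construction (a sketch; fields not listed fall back to the `transformers` defaults shown in the file above):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Mirrors config.json; the remaining fields use RobertaConfig defaults.
config = RobertaConfig(
    vocab_size=52000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    hidden_size=768,
    intermediate_size=3072,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config=config)
```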
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d53dfb0c238e7b34f9694a39a127ac66ca5c144a09604945d7e92d77a1655005
+ size 334030264
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "cls_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "eos_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "mask_token": { "content": "<mask>", "lstrip": true, "normalized": false, "rstrip": false, "single_word": false },
+   "pad_token": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "sep_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
+   "unk_token": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "1": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "2": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "3": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
+     "4": { "content": "<mask>", "lstrip": true, "normalized": false, "rstrip": false, "single_word": false, "special": true }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "max_len": 512,
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:361c27590c72754ba8ac5e3b227371e4ed2881a639738e66b897fd307c2a9ced
+ size 5649
vocab.json ADDED
The diff for this file is too large to render. See raw diff