dpshade22 committed
Commit 96e145f · verified · 1 Parent(s): 84e3be1

Upload e5-base-bible-50 embedding model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 768,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
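
With `pooling_mode_mean_tokens` as the only active mode, sentence embeddings are the attention-mask-aware mean of the token embeddings. A minimal sketch of that operation in plain PyTorch (toy tensors, not this repo's code):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings across the sequence, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # tokens per sequence, avoids div-by-zero
    return summed / counts

# Toy example: the two padded positions do not contribute to the mean.
emb = torch.randn(1, 4, 768)
mask = torch.tensor([[1, 1, 0, 0]])
print(mean_pool(emb, mask).shape)  # torch.Size([1, 768])
```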
README.md ADDED
@@ -0,0 +1,364 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dense
+ - generated_from_trainer
+ - dataset_size:70323
+ - loss:CosineSimilarityLoss
+ base_model: intfloat/e5-base-v2
+ widget:
+ - source_sentence: 'Birth of Cainan | participants: cainan_534, enos_1193'
+   sentences:
+   - The mother of Sisera looked out at a window, and cried through the lattice, Why is his chariot so long in coming? why tarry the wheels of his chariots?
+   - 'Therefore, behold, the days come, that I will do judgment upon the graven images of Babylon: and her whole land shall be confounded, and all her slain shall fall in the midst of her.'
+   - Which was the son of Mathusala, which was the son of Enoch, which was the son of Jared, which was the son of Maleleel, which was the son of Cainan,
+ - source_sentence: 'Jerusalem Council | participants: silas_2740, judas_1759, james_719, peter_2745, barnabas_1722, paul_2479'
+   sentences:
+   - What ailed thee, O thou sea, that thou fleddest? thou Jordan, that thou wast driven back?
+   - We have sent therefore Judas and Silas, who shall also tell you the same things by mouth.
+   - 'The Spirit itself beareth witness with our spirit, that we are the children of God:'
+ - source_sentence: But he that is married careth for the things that are of the world, how he may please his wife.
+   sentences:
+   - But she had brought them up to the roof of the house, and hid them with the stalks of flax, which she had laid in order upon the roof.
+   - And their whole body, and their backs, and their hands, and their wings, and the wheels, were full of eyes round about, even the wheels that they four had.
+   - 'There is difference also between a wife and a virgin. The unmarried woman careth for the things of the Lord, that she may be holy both in body and in spirit: but she that is married careth for the things of the world, how she may please her husband.'
+ - source_sentence: And the little owl, and the cormorant, and the great owl,
+   sentences:
+   - And the swan, and the pelican, and the gier eagle,
+   - Take Aaron and his sons with him, and the garments, and the anointing oil, and a bullock for the sin offering, and two rams, and a basket of unleavened bread;
+   - 'And his power shall be mighty, but not by his own power: and he shall destroy wonderfully, and shall prosper, and practise, and shall destroy the mighty and the holy people.'
+ - source_sentence: John's Witness
+   sentences:
+   - And they asked him, and said unto him, Why baptizest thou then, if thou be not that Christ, nor Elias, neither that prophet?
+   - Then I took Jaazaniah the son of Jeremiah, the son of Habaziniah, and his brethren, and all his sons, and the whole house of the Rechabites;
+   - 'But he turned, and said unto Peter, Get thee behind me, Satan: thou art an offence unto me: for thou savourest not the things that be of God, but those that be of men.'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+ 
+ # SentenceTransformer based on intfloat/e5-base-v2
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) <!-- at revision f52bf8ec8c7124536f0efb74aca902b2995e5bcd -->
+ - **Maximum Sequence Length:** 128 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
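
These three modules can be mirrored by hand with `transformers`, which makes the pipeline explicit: tokenize, run BERT, mean-pool over non-padding tokens, then L2-normalize. A sketch under the assumption that the repo files are available at a local path (`"path/to/this/repo"` is a placeholder):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "path/to/this/repo"  # placeholder for wherever these files live
tokenizer = AutoTokenizer.from_pretrained(repo)
backbone = AutoModel.from_pretrained(repo)

batch = tokenizer(["John's Witness"], padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = backbone(**batch).last_hidden_state  # (1, seq_len, 768)

# Pooling module: mean over non-padding tokens only.
mask = batch["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize module: unit-length vectors, so dot product equals cosine similarity.
sentence_embedding = F.normalize(pooled, p=2, dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```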
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     "John's Witness",
+     'And they asked him, and said unto him, Why baptizest thou then, if thou be not that Christ, nor Elias, neither that prophet?',
+     'Then I took Jaazaniah the son of Jeremiah, the son of Habaziniah, and his brethren, and all his sons, and the whole house of the Rechabites;',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities)
+ # tensor([[1.0000, 0.7469, 0.7488],
+ #         [0.7469, 1.0000, 0.8236],
+ #         [0.7488, 0.8236, 1.0000]])
+ ```
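
The same API also supports asymmetric retrieval, e.g. ranking passages against a short event title. A sketch reusing strings from the widget above (the model id is the same placeholder as in the snippet):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

query = "John's Witness"
passages = [
    "And they asked him, and said unto him, Why baptizest thou then, if thou be not that Christ, nor Elias, neither that prophet?",
    "What ailed thee, O thou sea, that thou fleddest? thou Jordan, that thou wast driven back?",
]

query_emb = model.encode([query])
passage_embs = model.encode(passages)

# Cosine similarity, the model's configured similarity function; higher = closer.
scores = model.similarity(query_emb, passage_embs)  # shape (1, 2)
print(passages[scores.argmax().item()])
```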
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### Unnamed Dataset
+ 
+ * Size: 70,323 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0 | sentence_1 | label |
+   |:--------|:-----------|:-----------|:------|
+   | type    | string | string | float |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 53.54 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 35.99 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.99</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 | label |
+   |:-----------|:-----------|:------|
+   | <code>Prophecies of Jeremiah \| participants: jeremiah_853</code> | <code>In his days Judah shall be saved, and Israel shall dwell safely: and this is his name whereby he shall be called, The Lord Our Righteousness.</code> | <code>1.0</code> |
+   | <code>God: (A.S. and Dutch God; Dan. Gud; Ger. Gott), the name of the Divine Being. It is the rendering (1) of the Hebrew <i> 'El</i> , from a word meaning to be strong; (2) of <i> 'Eloah_, plural _'Elohim</i> . The singular form, <i> Eloah</i> , is used only in poetry. The plural form is more commonly used in all parts of the Bible, The Hebrew word Jehovah (q.v.), the only other word generally employed to denote the Supreme Being, is uniformly rendered in the Authorized Version by "LORD," printed in small capitals. The existence of God is taken for granted in the Bible. There is nowhere any argument to prove it. He who disbelieves this truth is spoken of as one devoid of understanding ( Psalms 14:1 ). The arguments generally adduced by theologians in proof of the being of God are: <li> The a priori argument, which is the testimony afforded by reason. <li> The a posteriori argument, by which we proceed logically from the facts of experience to causes. These arguments are, (a) T...</code> | <code>And if ye offer the blind for sacrifice, is it not evil? and if ye offer the lame and sick, is it not evil? offer it now unto thy governor; will he be pleased with thee, or accept thy person? saith the Lord of hosts.</code> | <code>1.0</code> |
+   | <code>Holy Week</code> | <code>For in those days shall be affliction, such as was not from the beginning of the creation which God created unto this time, neither shall be.</code> | <code>1.0</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
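
CosineSimilarityLoss with an MSE `loss_fct` penalizes the squared gap between the cosine similarity of an embedding pair and its target label. A sketch of that computation on toy tensors (not the library's internals verbatim):

```python
import torch
import torch.nn.functional as F

def cosine_similarity_loss(u: torch.Tensor, v: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """MSE between pairwise cosine similarities and target scores in [0, 1]."""
    cos = F.cosine_similarity(u, v, dim=1)  # (batch,)
    return F.mse_loss(cos, labels)

u, v = torch.randn(4, 768), torch.randn(4, 768)
labels = torch.tensor([1.0, 1.0, 0.0, 1.0])  # mirrors the mostly-1.0 labels above
print(cosine_similarity_loss(u, v, labels))
```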
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `num_train_epochs`: 1
+ - `max_steps`: 50
+ - `multi_dataset_batch_sampler`: round_robin
+ 
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 8
+ - `per_device_eval_batch_size`: 8
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 1
+ - `max_steps`: 50
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: None
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `parallelism_config`: None
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `project`: huggingface
+ - `trackio_space_id`: trackio
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: no
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: True
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+ 
+ </details>
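
A sketch of how the non-default values above map onto a sentence-transformers training run; the dataset here is a two-row stand-in for the 70,323-pair dataset described earlier, and the output path is a placeholder:

```python
from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("intfloat/e5-base-v2")
train_dataset = Dataset.from_dict({  # stand-in for the real training pairs
    "sentence_0": ["Holy Week", "Birth of Cainan | participants: cainan_534, enos_1193"],
    "sentence_1": ["For in those days shall be affliction,", "Which was the son of Cainan,"],
    "label": [1.0, 1.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="e5-base-bible-50",             # placeholder output path
    num_train_epochs=1,
    max_steps=50,                              # training stops after 50 optimizer steps
    per_device_train_batch_size=8,
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),          # default loss_fct is torch.nn.MSELoss()
)
trainer.train()
```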
+ 
+ ### Framework Versions
+ - Python: 3.11.14
+ - Sentence Transformers: 5.2.0
+ - Transformers: 4.57.6
+ - PyTorch: 2.10.0+cpu
+ - Accelerate: 1.12.0
+ - Datasets: 4.5.0
+ - Tokenizers: 0.22.2
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+     "architectures": [
+         "BertModel"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "classifier_dropout": null,
+     "dtype": "float32",
+     "gradient_checkpointing": false,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 768,
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "layer_norm_eps": 1e-12,
+     "max_position_embeddings": 512,
+     "model_type": "bert",
+     "num_attention_heads": 12,
+     "num_hidden_layers": 12,
+     "pad_token_id": 0,
+     "position_embedding_type": "absolute",
+     "transformers_version": "4.57.6",
+     "type_vocab_size": 2,
+     "use_cache": true,
+     "vocab_size": 30522
+ }
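
This is a stock BERT-base shape (12 layers, 12 heads, hidden size 768), matching the e5-base-v2 backbone. A quick sanity check with `transformers` (the repo path is a placeholder):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("path/to/this/repo")  # placeholder
assert config.model_type == "bert"
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
# 768 12 12
```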
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "model_type": "SentenceTransformer",
+     "__version__": {
+         "sentence_transformers": "5.2.0",
+         "transformers": "4.57.6",
+         "pytorch": "2.10.0+cpu"
+     },
+     "prompts": {
+         "query": "",
+         "document": ""
+     },
+     "default_prompt_name": null,
+     "similarity_fn_name": "cosine"
+ }
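
Both the `query` and `document` prompts are empty strings here, so prompt-aware encoding adds no prefix; the base intfloat/e5-base-v2 card recommends "query: "/"passage: " prefixes, which this fine-tune apparently does not rely on. A sketch (placeholder path):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/this/repo")  # placeholder

# With both prompts configured as "", these two calls produce the same embedding:
with_prompt = model.encode("Holy Week", prompt_name="query")
without_prompt = model.encode("Holy Week")
```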
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09b9f26ca43e3f61f2e611ad35eabd72c071023d1a47e86dfe6d3e861d681fec
+ size 437951328
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     },
+     {
+         "idx": 2,
+         "name": "2",
+         "path": "2_Normalize",
+         "type": "sentence_transformers.models.Normalize"
+     }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 128,
+     "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+     "cls_token": {
+         "content": "[CLS]",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "mask_token": {
+         "content": "[MASK]",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "pad_token": {
+         "content": "[PAD]",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "sep_token": {
+         "content": "[SEP]",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     },
+     "unk_token": {
+         "content": "[UNK]",
+         "lstrip": false,
+         "normalized": false,
+         "rstrip": false,
+         "single_word": false
+     }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+     "added_tokens_decoder": {
+         "0": {
+             "content": "[PAD]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "100": {
+             "content": "[UNK]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "101": {
+             "content": "[CLS]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "102": {
+             "content": "[SEP]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         },
+         "103": {
+             "content": "[MASK]",
+             "lstrip": false,
+             "normalized": false,
+             "rstrip": false,
+             "single_word": false,
+             "special": true
+         }
+     },
+     "clean_up_tokenization_spaces": true,
+     "cls_token": "[CLS]",
+     "do_lower_case": true,
+     "extra_special_tokens": {},
+     "mask_token": "[MASK]",
+     "model_max_length": 512,
+     "pad_token": "[PAD]",
+     "sep_token": "[SEP]",
+     "strip_accents": null,
+     "tokenize_chinese_chars": true,
+     "tokenizer_class": "BertTokenizer",
+     "unk_token": "[UNK]"
+ }
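
Note the interplay with `sentence_bert_config.json`: the tokenizer's `model_max_length` is 512, but the SentenceTransformer wrapper truncates at its own `max_seq_length` of 128. A sketch of the effective truncation (placeholder path):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/this/repo")  # placeholder
long_text = "And the little owl, and the cormorant, and the great owl, " * 50

# Encoding through the SentenceTransformer applies the 128-token cap, not 512:
ids = tokenizer(long_text, truncation=True, max_length=128)["input_ids"]
print(len(ids))  # 128
```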
vocab.txt ADDED
The diff for this file is too large to render. See raw diff