noahjax committed · verified
Commit b90c7e1 · 1 Parent(s): 787b523

Upload fine-tuned chart reranker model

README.md ADDED
@@ -0,0 +1,358 @@
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:6851
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L6-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- pearson
- spearman
model-index:
- name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2
  results:
  - task:
      type: cross-encoder-correlation
      name: Cross Encoder Correlation
    dataset:
      name: validation
      type: validation
    metrics:
    - type: pearson
      value: 0.6742730018723011
      name: Pearson
    - type: spearman
      value: 0.5158175772359095
      name: Spearman
---

# CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. It is trained as a chart reranker: it scores chart-metadata snippets (title, collections, datasets, chart type, sources) against natural-language queries, as in the samples below.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) <!-- at revision c5ee24cb16019beea0893ab7796b1df96625c6b8 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub (replace the placeholder with this repo's model id)
model = CrossEncoder("cross_encoder_model_id")
# Get scores for pairs of texts
pairs = [
    ['According to a study by the Global Sustainable Tourism Council, by what percentage can sustainable tourism practices increase visitor satisfaction?', 'Title: "Life satisfaction, measured weekly (United Kingdom)"\n Collections: YouGov Trackers\n Datasets: YouGovTrackerValueV2\n Chart Type: survey:timeseries\n Sources: YouGov'],
    ['Scoreline for Al‑Bayraq W vs Al‑Riyadh W (WFDL)', 'Title: "Grainger Overview, CBSE:IAM Overview"\n Collections: Companies\n Datasets: InstrumentClosePrice1Day\n Chart Type: timeseries'],
    ["According to the article 'Top 3 Higher Education Trends to Watch in 2025' by Hanover Research, what percentage of prospective college students in the U.S. report feeling 'not at all familiar' or only 'slightly familiar' with the application process?", 'Title: "AirTanker Services Limited Percentage"\n Collections: Companies\n Chart Type: company_card\n Company: name=ATS Corporation, aliases=[\'ATS Automation Tooling Systems Inc.\', \'Ats Corp\', \'ATS\']\n Sources: S&P Global'],
    ["When did RetailMeNot launch the '5 to Buy' event?", 'Title: "Art - past 3 months (United States)"\n Collections: YouGov Trackers\n Datasets: YouGovTrackerValueV2\n Chart Type: survey:timeseries\n Sources: YouGov'],
    ["When was the article '5 Key Trends To Shape Your Business Strategy For 2025' by IESE Business School published on Forbes?", 'Title: "Business Coach Overview"\n Collections: Companies\n Chart Type: company_card\n Company: name=Business Coach Inc., aliases=[\'Business Coach\']\n Sources: S&P Global'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'According to a study by the Global Sustainable Tourism Council, by what percentage can sustainable tourism practices increase visitor satisfaction?',
    [
        'Title: "Life satisfaction, measured weekly (United Kingdom)"\n Collections: YouGov Trackers\n Datasets: YouGovTrackerValueV2\n Chart Type: survey:timeseries\n Sources: YouGov',
        'Title: "Grainger Overview, CBSE:IAM Overview"\n Collections: Companies\n Datasets: InstrumentClosePrice1Day\n Chart Type: timeseries',
        'Title: "AirTanker Services Limited Percentage"\n Collections: Companies\n Chart Type: company_card\n Company: name=ATS Corporation, aliases=[\'ATS Automation Tooling Systems Inc.\', \'Ats Corp\', \'ATS\']\n Sources: S&P Global',
        'Title: "Art - past 3 months (United States)"\n Collections: YouGov Trackers\n Datasets: YouGovTrackerValueV2\n Chart Type: survey:timeseries\n Sources: YouGov',
        'Title: "Business Coach Overview"\n Collections: Companies\n Chart Type: company_card\n Company: name=Business Coach Inc., aliases=[\'Business Coach\']\n Sources: S&P Global',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
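
If sentence-transformers is not available, the pair scores can also be computed with plain 🤗 Transformers, since the checkpoint is a standard `BertForSequenceClassification` with one output label. This is a minimal sketch, not part of the original card; the model id is the same placeholder as above and must be replaced with this repo's id:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cross_encoder_model_id"  # placeholder, as above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# A cross encoder reads each (query, chart-metadata) pair as one joint input.
queries = ["When did RetailMeNot launch the '5 to Buy' event?"]
docs = ['Title: "Art - past 3 months (United States)"\n Collections: YouGov Trackers']

features = tokenizer(queries, docs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # one raw logit per pair

# Training used BinaryCrossEntropyLoss with an Identity activation, so the
# model emits raw logits; a sigmoid maps them to 0-1 relevance probabilities.
print(torch.sigmoid(scores))
```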

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Correlation

* Dataset: `validation`
* Evaluated with [<code>CrossEncoderCorrelationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderCorrelationEvaluator)

| Metric       | Value      |
|:-------------|:-----------|
| pearson      | 0.6743     |
| **spearman** | **0.5158** |
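
The card's numbers come from `CrossEncoderCorrelationEvaluator`, but the same statistics can be recomputed directly from raw predictions. A minimal sketch, not from the original card; the three pairs and labels below are illustrative stand-ins for the real validation split (the first pair is a hypothetical relevant match):

```python
from scipy.stats import pearsonr, spearmanr
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross_encoder_model_id")  # placeholder; use this repo's id

# Illustrative stand-ins for the held-out (query, chart-metadata) pairs.
val_pairs = [
    ("Life satisfaction trend in the UK",  # hypothetical relevant query
     'Title: "Life satisfaction, measured weekly (United Kingdom)"\n Collections: YouGov Trackers'),
    ("When did RetailMeNot launch the '5 to Buy' event?",
     'Title: "Art - past 3 months (United States)"\n Collections: YouGov Trackers'),
    ("Scoreline for Al‑Bayraq W vs Al‑Riyadh W (WFDL)",
     'Title: "Grainger Overview, CBSE:IAM Overview"\n Collections: Companies'),
]
val_labels = [1.0, 0.0, 0.0]  # float relevance labels, as in the training data

scores = model.predict(val_pairs)
print("pearson: ", pearsonr(val_labels, scores)[0])
print("spearman:", spearmanr(val_labels, scores)[0])
```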

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 6,851 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | <ul><li>min: 7 characters</li><li>mean: 99.0 characters</li><li>max: 2253 characters</li></ul> | <ul><li>min: 79 characters</li><li>mean: 184.27 characters</li><li>max: 716 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.06</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>According to a study by the Global Sustainable Tourism Council, by what percentage can sustainable tourism practices increase visitor satisfaction?</code> | <code>Title: "Life satisfaction, measured weekly (United Kingdom)"<br> Collections: YouGov Trackers<br> Datasets: YouGovTrackerValueV2<br> Chart Type: survey:timeseries<br> Sources: YouGov</code> | <code>0.0</code> |
  | <code>Scoreline for Al‑Bayraq W vs Al‑Riyadh W (WFDL)</code> | <code>Title: "Grainger Overview, CBSE:IAM Overview"<br> Collections: Companies<br> Datasets: InstrumentClosePrice1Day<br> Chart Type: timeseries</code> | <code>0.0</code> |
  | <code>According to the article 'Top 3 Higher Education Trends to Watch in 2025' by Hanover Research, what percentage of prospective college students in the U.S. report feeling 'not at all familiar' or only 'slightly familiar' with the application process?</code> | <code>Title: "AirTanker Services Limited Percentage"<br> Collections: Companies<br> Chart Type: company_card<br> Company: name=ATS Corporation, aliases=['ATS Automation Tooling Systems Inc.', 'Ats Corp', 'ATS']<br> Sources: S&P Global</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": null
  }
  ```
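
For reference, a minimal sketch of how this configuration maps onto the sentence-transformers v5 cross-encoder training API. This is not the original training script: the two dummy rows stand in for the real 6,851-pair dataset, the `output_dir` is hypothetical, and evaluation (run every 50 steps per the logs below) is omitted for brevity:

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")

# Two dummy rows standing in for the real (query, chart-metadata, label) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Life satisfaction trend in the UK",
                   "Scoreline for Al‑Bayraq W vs Al‑Riyadh W (WFDL)"],
    "sentence_1": ['Title: "Life satisfaction, measured weekly (United Kingdom)"',
                   'Title: "Grainger Overview, CBSE:IAM Overview"'],
    "label": [1.0, 0.0],
})

# BinaryCrossEntropyLoss with the card's parameters: Identity activation, no pos_weight.
loss = BinaryCrossEntropyLoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="chart-reranker",  # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=32,
)

trainer = CrossEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```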

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 1

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step | validation_spearman |
|:------:|:----:|:-------------------:|
| 0.2326 | 50   | 0.3960              |
| 0.4651 | 100  | 0.4804              |
| 0.6977 | 150  | 0.5031              |
| 0.9302 | 200  | 0.5158              |


### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 5.1.1
- Transformers: 4.57.1
- PyTorch: 2.9.0
- Accelerate: 1.10.1
- Datasets: 4.2.0
- Tokenizers: 0.22.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,35 @@
{
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "dtype": "float32",
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.linear.Identity",
    "version": "5.1.1"
  },
  "transformers_version": "4.57.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
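
A quick sanity check that a downloaded checkpoint matches the 6-layer MiniLM geometry above; a minimal sketch, with the card's placeholder id assumed replaced by this repo's id:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("cross_encoder_model_id")  # placeholder id
# 6 transformer layers, 384-dim hidden states, a single classification label.
assert config.num_hidden_layers == 6
assert config.hidden_size == 384
assert len(config.id2label) == 1
```
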
eval/CrossEncoderCorrelationEvaluator_validation_results.csv ADDED
@@ -0,0 +1,2 @@
epoch,steps,Pearson_Correlation,Spearman_Correlation
1.0,215,0.6765057885942694,0.5160340950125839
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04a6402495d00a3b98e802a2e1ce50fada050156fa8ac9906d5c561e9fd2aec2
size 90866412
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
training_info.txt ADDED
@@ -0,0 +1,6 @@
Base Model: cross-encoder/ms-marco-MiniLM-L6-v2
Training Samples: 6851
Epochs: 1
Batch Size: 32
Learning Rate: 2e-05
Max Length: 512
vocab.txt ADDED
The diff for this file is too large to render. See raw diff