reboo13 committed on
Commit ff76e1f · verified · 1 Parent(s): 2d2bea2

Add new SentenceTransformer model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": true,
+   "include_prompt": true
+ }
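
Note: this pooling config enables only `pooling_mode_lasttoken`, so each text is represented by the hidden state of its final non-padded token. As a rough illustrative sketch (not the library's actual implementation), using plain nested lists:

```python
def last_token_pool(token_embeddings, attention_mask):
    """Pick the embedding of the last non-padded token per sequence.

    token_embeddings: list of [seq_len][dim] nested lists
    attention_mask:   list of [seq_len] 0/1 lists
    """
    pooled = []
    for emb, mask in zip(token_embeddings, attention_mask):
        last = max(i for i, m in enumerate(mask) if m == 1)
        pooled.append(emb[last])
    return pooled

# Two sequences of length 3; the second is padded after token index 1.
emb = [[[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]],
       [[4.0, 0.0], [5.0, 0.0], [0.0, 0.0]]]
mask = [[1, 1, 1], [1, 1, 0]]
print(last_token_pool(emb, mask))  # [[3.0, 0.0], [5.0, 0.0]]
```

Since the tokenizer in this repo uses `"padding_side": "left"`, the last position is never padding in practice and the last-token lookup is trivial; the mask scan above covers the right-padded case too.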
README.md ADDED
@@ -0,0 +1,432 @@
+ ---
+ language:
+ - en
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dense
+ - generated_from_trainer
+ - dataset_size:106628
+ - loss:MultipleNegativesRankingLoss
+ base_model: Qwen/Qwen3-Embedding-0.6B
+ widget:
+ - source_sentence: ace-v
+   sentences:
+   - The floor plan was drafted at 1/4 inch scale where each quarter inch equals one
+     foot.
+   - Fingerprint examiners follow the ACE-V methodology for identification.
+   - Most modern streaming services offer content in 1080p full HD quality.
+ - source_sentence: adult learner
+   sentences:
+   - The adult learner brings valuable life experience to the classroom.
+   - Accounts payable represents money owed to suppliers and vendors.
+   - The inspection confirmed all above grade work met code requirements.
+ - source_sentence: 1/4 inch scale
+   sentences:
+   - Precise adjustments require accurate action gauge readings.
+   - The quality inspector identified adhesion failure in the sample.
+   - The architect created drawings at 1/4 inch scale for the client presentation.
+ - source_sentence: acrylic paint
+   sentences:
+   - Artists prefer acrylic paint for its fast drying time.
+   - The company reported strong adjusted EBITDA growth this quarter.
+   - The clinic specializes in adolescent health services.
+ - source_sentence: adult learning
+   sentences:
+   - Solar developers calculate AEP, or annual energy production.
+   - The course was designed using adult learning best practices.
+   - The wizard cast Abi-Dalzim's horrid wilting, draining moisture from enemies.
+ datasets:
+ - electroglyph/technical
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+
+ # SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [technical](https://huggingface.co/datasets/electroglyph/technical) dataset. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision c54f2e6e80b2d7b7de06f51cec4959f6b3e03418 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+   - [technical](https://huggingface.co/datasets/electroglyph/technical)
+ - **Language:** en
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'PeftModelForFeatureExtraction'})
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("reboo13/ad")
+ # Run inference
+ sentences = [
+     'adult learning',
+     'The course was designed using adult learning best practices.',
+     'Solar developers calculate AEP, or annual energy production.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities)
+ # tensor([[1.0000, 0.6213, 0.1227],
+ #         [0.6213, 1.0000, 0.1474],
+ #         [0.1227, 0.1474, 1.0000]])
+ ```
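
Note: the model's `similarity` function is cosine similarity (and because the pipeline ends in a Normalize module, the embeddings are unit-length, so cosine similarity coincides with a plain dot product). A minimal pure-Python sketch of what is being computed, for illustration only:

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of L2 norms.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(round(cosine([1.0, 0.0], [1.0, 1.0]), 4))  # 0.7071
```

For unit-length vectors the `nu * nv` denominator is 1, which is why normalized embedding models can use fast dot-product search indexes directly.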
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### technical
+
+ * Dataset: [technical](https://huggingface.co/datasets/electroglyph/technical) at [05eeb90](https://huggingface.co/datasets/electroglyph/technical/tree/05eeb90e13d6bca725a5888f1ba206b2878f9c97)
+ * Size: 106,628 training samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive |
+   |:--------|:-------|:---------|
+   | type    | string | string   |
+   | details | <ul><li>min: 2 tokens</li><li>mean: 3.83 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 12.66 tokens</li><li>max: 23 tokens</li></ul> |
+ * Samples:
+   | anchor | positive |
+   |:-------|:---------|
+   | <code>.308</code> | <code>The .308 Winchester is a popular rifle cartridge used for hunting and target shooting.</code> |
+   | <code>.308</code> | <code>Many precision rifles are chambered in .308 for its excellent long-range accuracy.</code> |
+   | <code>.308</code> | <code>The sniper selected a .308 caliber round for the mission.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim",
+       "gather_across_devices": false
+   }
+   ```
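
Note: MultipleNegativesRankingLoss uses in-batch negatives. For each anchor, its paired positive is the correct "class" and every other positive in the batch is a negative, so the loss is mean cross-entropy over rows of the scaled similarity matrix. A hedged pure-Python sketch (the library implements this in PyTorch):

```python
import math

def mnrl_loss(sim, scale=20.0):
    """Mean cross-entropy where row i's correct 'class' is column i.

    sim: square matrix of anchor-to-positive cosine similarities.
    """
    losses = []
    for i, row in enumerate(sim):
        logits = [scale * s for s in row]
        m = max(logits)  # subtract the max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_z - logits[i])
    return sum(losses) / len(losses)

# A well-separated batch yields a near-zero loss.
print(mnrl_loss([[0.9, 0.1], [0.1, 0.9]]) < 0.01)  # True
```

The `scale` of 20.0 sharpens the softmax: cosine similarities live in [-1, 1], so without scaling the logits would be too close together for the cross-entropy to discriminate well.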
+
+ ### Training Hyperparameters
+
+ #### Non-Default Hyperparameters
+
+ - `per_device_train_batch_size`: 256
+ - `learning_rate`: 3e-05
+ - `max_steps`: 60
+ - `lr_scheduler_type`: constant_with_warmup
+ - `warmup_ratio`: 0.03
+ - `bf16`: True
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 8
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 3e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 3.0
+ - `max_steps`: 60
+ - `lr_scheduler_type`: constant_with_warmup
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.03
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `parallelism_config`: None
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+
+ | Epoch  | Step | Training Loss |
+ |:------:|:----:|:-------------:|
+ | 0.0024 | 1    | 2.9285        |
+ | 0.0048 | 2    | 2.9415        |
+ | 0.0072 | 3    | 2.7433        |
+ | 0.0096 | 4    | 2.8367        |
+ | 0.0120 | 5    | 2.7583        |
+ | 0.0144 | 6    | 2.8774        |
+ | 0.0168 | 7    | 2.7791        |
+ | 0.0192 | 8    | 2.5914        |
+ | 0.0216 | 9    | 2.5369        |
+ | 0.0240 | 10   | 2.5583        |
+ | 0.0264 | 11   | 2.428         |
+ | 0.0288 | 12   | 2.2281        |
+ | 0.0312 | 13   | 2.3207        |
+ | 0.0336 | 14   | 2.3152        |
+ | 0.0360 | 15   | 2.3222        |
+ | 0.0384 | 16   | 1.9328        |
+ | 0.0408 | 17   | 2.0254        |
+ | 0.0432 | 18   | 2.2076        |
+ | 0.0456 | 19   | 1.9551        |
+ | 0.0480 | 20   | 2.0753        |
+ | 0.0504 | 21   | 1.9028        |
+ | 0.0528 | 22   | 1.8977        |
+ | 0.0552 | 23   | 1.8852        |
+ | 0.0576 | 24   | 1.8288        |
+ | 0.0600 | 25   | 1.7363        |
+ | 0.0624 | 26   | 1.8455        |
+ | 0.0647 | 27   | 1.7129        |
+ | 0.0671 | 28   | 1.9365        |
+ | 0.0695 | 29   | 2.0386        |
+ | 0.0719 | 30   | 1.8644        |
+ | 0.0743 | 31   | 1.481         |
+ | 0.0767 | 32   | 1.8281        |
+ | 0.0791 | 33   | 1.5593        |
+ | 0.0815 | 34   | 1.7088        |
+ | 0.0839 | 35   | 1.7356        |
+ | 0.0863 | 36   | 1.6223        |
+ | 0.0887 | 37   | 1.6218        |
+ | 0.0911 | 38   | 1.4948        |
+ | 0.0935 | 39   | 1.6253        |
+ | 0.0959 | 40   | 1.553         |
+ | 0.0983 | 41   | 1.565         |
+ | 0.1007 | 42   | 1.6852        |
+ | 0.1031 | 43   | 1.4419        |
+ | 0.1055 | 44   | 1.4839        |
+ | 0.1079 | 45   | 1.4249        |
+ | 0.1103 | 46   | 1.4301        |
+ | 0.1127 | 47   | 1.5504        |
+ | 0.1151 | 48   | 1.4154        |
+ | 0.1175 | 49   | 1.3868        |
+ | 0.1199 | 50   | 1.601         |
+ | 0.1223 | 51   | 1.468         |
+ | 0.1247 | 52   | 1.4715        |
+ | 0.1271 | 53   | 1.6019        |
+ | 0.1295 | 54   | 1.4216        |
+ | 0.1319 | 55   | 1.3206        |
+ | 0.1343 | 56   | 1.4081        |
+ | 0.1367 | 57   | 1.2969        |
+ | 0.1391 | 58   | 1.5933        |
+ | 0.1415 | 59   | 1.4106        |
+ | 0.1439 | 60   | 1.7639        |
+
+ ### Framework Versions
+
+ - Python: 3.12.12
+ - Sentence Transformers: 5.2.0
+ - Transformers: 4.56.2
+ - PyTorch: 2.9.0+cu126
+ - Accelerate: 1.12.0
+ - Datasets: 4.3.0
+ - Tokenizers: 0.22.2
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
adapter_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": {
+     "base_model_class": "Qwen3Model",
+     "parent_library": "transformers.models.qwen3.modeling_qwen3",
+     "unsloth_fixed": true
+   },
+   "base_model_name_or_path": "Qwen/Qwen3-Embedding-0.6B",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": false,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "peft_version": "0.18.1",
+   "qalora_group_size": 16,
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "k_proj",
+     "up_proj",
+     "down_proj",
+     "q_proj",
+     "o_proj",
+     "gate_proj",
+     "v_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "FEATURE_EXTRACTION",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
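
Note: this adapter config uses LoRA with `r` = 32 and `lora_alpha` = 32 on all attention and MLP projections. Conceptually, each adapted weight matrix W becomes W + (alpha / r) · B·A, where A and B are the small trained matrices. A toy sketch of the low-rank update (illustrative only, not PEFT's code):

```python
def lora_delta(A, B, r, alpha):
    """Compute (alpha / r) * B @ A for a rank-r LoRA update.

    A: r x in_features, B: out_features x r (plain nested lists).
    """
    scale = alpha / r
    return [[scale * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(len(A[0]))]
            for i in range(len(B))]

# Rank-1 toy example; alpha == r, so the scale is 1.0.
A = [[1.0, 2.0]]    # 1 x 2
B = [[3.0], [4.0]]  # 2 x 1
print(lora_delta(A, B, r=1, alpha=1))  # [[3.0, 6.0], [4.0, 8.0]]
```

Because `r` equals `lora_alpha` here, the update is applied at scale 1.0; only A and B are trained, which is why the adapter above is ~80 MB rather than a full copy of the 0.6B-parameter base model.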
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17054f156119b40a788b172445265ddec4408ff072e344497410caa78d9409b0
+ size 80790104
added_tokens.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "</think>": 151668,
+   "</tool_call>": 151658,
+   "</tool_response>": 151666,
+   "<think>": 151667,
+   "<tool_call>": 151657,
+   "<tool_response>": 151665,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,85 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0].role == 'system' %}
+         {{- messages[0].content + '\n\n' }}
+     {%- endif %}
+     {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0].role == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+     {%- set index = (messages|length - 1) - loop.index0 %}
+     {%- if ns.multi_step_tool and message.role == "user" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
+         {%- set ns.multi_step_tool = false %}
+         {%- set ns.last_query_index = index %}
+     {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+         {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {%- set content = message.content %}
+         {%- set reasoning_content = '' %}
+         {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
+             {%- set reasoning_content = message.reasoning_content %}
+         {%- else %}
+             {%- if '</think>' in message.content %}
+                 {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
+                 {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+             {%- endif %}
+         {%- endif %}
+         {%- if loop.index0 > ns.last_query_index %}
+             {%- if loop.last or (not loop.last and reasoning_content) %}
+                 {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+             {%- else %}
+                 {{- '<|im_start|>' + message.role + '\n' + content }}
+             {%- endif %}
+         {%- else %}
+             {{- '<|im_start|>' + message.role + '\n' + content }}
+         {%- endif %}
+         {%- if message.tool_calls %}
+             {%- for tool_call in message.tool_calls %}
+                 {%- if (loop.first and content) or (not loop.first) %}
+                     {{- '\n' }}
+                 {%- endif %}
+                 {%- if tool_call.function %}
+                     {%- set tool_call = tool_call.function %}
+                 {%- endif %}
+                 {{- '<tool_call>\n{"name": "' }}
+                 {{- tool_call.name }}
+                 {{- '", "arguments": ' }}
+                 {%- if tool_call.arguments is string %}
+                     {{- tool_call.arguments }}
+                 {%- else %}
+                     {{- tool_call.arguments | tojson }}
+                 {%- endif %}
+                 {{- '}\n</tool_call>' }}
+             {%- endfor %}
+         {%- endif %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- message.content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n' }}
+     {%- if enable_thinking is defined and enable_thinking is false %}
+         {{- '<think>\n\n</think>\n\n' }}
+     {%- endif %}
+ {%- endif %}
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SentenceTransformer",
+   "__version__": {
+     "sentence_transformers": "5.2.0",
+     "transformers": "4.56.2",
+     "pytorch": "2.9.0+cu126"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
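
Note: the third module, Normalize, L2-normalizes each pooled embedding so that cosine similarity reduces to a dot product. A minimal sketch of that final step:

```python
import math

def l2_normalize(vec):
    # Scale the vector to unit length (L2 norm == 1).
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

v = l2_normalize([3.0, 4.0])
print(v)  # [0.6, 0.8]
```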
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a5b90ffcbd8fe896c9ee9fe56c5dd84116f876ad5cdbe0d1424fbe150f41ca6
+ size 11423970
tokenizer_config.json ADDED
@@ -0,0 +1,240 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151665": {
+       "content": "<tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151666": {
+       "content": "</tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151667": {
+       "content": "<think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151668": {
+       "content": "</think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "left",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff