---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:19964
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m-v2.0
widget:
- source_sentence: 'Kollegin hat Probleme mit dem Login zu '
sentences:
- Alle genannten Kinder gab es in kitaplus. Bei einem musste nur eine neue BI angelegt
werden, bei den anderen muss der Vertrag in einer anderen Kita rückgängig gemacht
werden, damit es in kitaplus in dieser Einrichtung aus der Liste der Absagen genommen
werden kann.
- Der Bereich ist aktuell noch nicht sichtbar.
- muss mit dem Rentamt geklärt werden
- source_sentence: Benutzer möchte einen Kollegen nur für die Dokumentenbibliothek
anlegen.
sentences:
- Rücksprache mit Entwickler.
- Sie muss den Regler auf Anzahl stellen
- Zusammen die Rolle gewählt und dort dann in den individuellen Rechten alles auf
lesend bzw. ausblenden gestellt, außer die Bibliothek.
- source_sentence: Ist es richtig so, dass Mitarbeiter, wenn sie nach einer gewissen
Zeit wieder in die Einrichtung kommen, erneut angelegt werden müssen?
sentences:
- Userin an den Träger verwiesen, dieser kann bei ihr ein neues Passwort setzen.
- Ja, das ist korrekt so.
- Userin muss erst rechts über das 3-Punkte-menü die "Anmeldedaten zusammenführen".
Danach muss man in den angelegten BI die Gruppenform des Anmeldeportals angeben.
- source_sentence: Userin kann die Öffnungszeiten der Einrichtung nicht bearbeiten.
sentences:
- informiert, dass es keinen Testzugang gibt, aber Handbücher und Hilfen in zur
Verfügung stehen, wenn die Schnittstelle eingerichtet wurde.
- Bereits bekannt, die Kollegen sind schon dabei den Fehler zu beheben.
- Userin darf dies mit der Rolle nicht.
- source_sentence: fragt wie der Stand zu dem aktuellen Problem ist
sentences:
- Userin muss sich an die Bistums IT wenden.
- In Klärung mit der Kollegin - Das Problem liegt leider an deren Betreiber. Die
sind aber informiert und arbeiten bereits daran
- findet diese in der Übersicht der Gruppen.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Snowflake/snowflake arctic embed m v2.0
type: Snowflake/snowflake-arctic-embed-m-v2.0
metrics:
- type: cosine_accuracy@1
value: 0.19708029197080293
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7226277372262774
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8029197080291971
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8759124087591241
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.19708029197080293
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.44525547445255476
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.46277372262773725
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.43576642335766425
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.008762531776700945
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09805489105617915
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1603290464604333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.23250747987759582
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4532269034566889
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.47734040088054697
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2936078777768552
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) <!-- at revision 95c2741480856aa9666782eb4afe11959938017f -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- train
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'GteModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
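As a quick sanity check, the sequence length, output dimensionality, and similarity function listed above can be read off the loaded model directly. A minimal sketch using the public Sentence Transformers API:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BjarneNPO/finetune_21_08_2025_18_35_25")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # cosine
```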
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("BjarneNPO/finetune_21_08_2025_18_35_25")
# Run inference
queries = [
"fragt wie der Stand zu dem aktuellen Problem ist",
]
documents = [
'In Klärung mit der Kollegin - Das Problem liegt leider an deren Betreiber. Die sind aber informiert und arbeiten bereits daran',
'findet diese in der Übersicht der Gruppen.',
'Userin muss sich an die Bistums IT wenden.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.2744, 0.0387, 0.0701]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `Snowflake/snowflake-arctic-embed-m-v2.0`
* Evaluated with <code>scripts.InformationRetrievalEvaluatorCustom.InformationRetrievalEvaluatorCustom</code> using these parameters:
```json
{
"query_prompt_name": "query",
"corpus_prompt_name": "query"
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1971 |
| cosine_accuracy@3 | 0.7226 |
| cosine_accuracy@5 | 0.8029 |
| cosine_accuracy@10 | 0.8759 |
| cosine_precision@1 | 0.1971 |
| cosine_precision@3 | 0.4453 |
| cosine_precision@5 | 0.4628 |
| cosine_precision@10 | 0.4358 |
| cosine_recall@1 | 0.0088 |
| cosine_recall@3 | 0.0981 |
| cosine_recall@5 | 0.1603 |
| cosine_recall@10 | 0.2325 |
| **cosine_ndcg@10** | **0.4532** |
| cosine_mrr@10 | 0.4773 |
| cosine_map@100 | 0.2936 |
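The scores above were produced with a custom evaluator from the training scripts, which is not shipped with this repository. Below is a minimal sketch of an equivalent setup using the built-in [`InformationRetrievalEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#informationretrievalevaluator) with the same prompt settings; the queries, corpus, and relevance judgments are placeholders, not the actual evaluation data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("BjarneNPO/finetune_21_08_2025_18_35_25")

# Placeholder evaluation data; the real query/corpus/relevance mappings are not published.
queries = {"q1": "Userin kann die Öffnungszeiten der Einrichtung nicht bearbeiten."}
corpus = {
    "d1": "Userin darf dies mit der Rolle nicht.",
    "d2": "Der Bereich ist aktuell noch nicht sichtbar.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    query_prompt_name="query",   # both queries and corpus use the "query" prompt, as configured above
    corpus_prompt_name="query",
    name="snowflake-arctic-embed-m-v2.0",
)
print(evaluator(model))
```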
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### train
* Dataset: train
* Size: 19,964 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 27.77 tokens</li><li>max: 615 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 22.87 tokens</li><li>max: 151 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------|
| <code>Wie kann man die Jahresurlaubsübersicht exportieren?</code> | <code>über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren</code> |
| <code>1. Vertragsabschlüsse werden nicht übertragen <br>2. Kinder kommen nicht von nach <br>3. Absage kann bei Portalstatus nicht erstellt werden.</code> | <code>Ticket <br>Userin gebeten sich an den Support zu wenden, da der Fehler liegt.</code> |
| <code>Wird im Anmeldeportal nicht gefunden.</code> | <code>Die Schnittstelle war noch nicht aktiviert und Profil ebenfalls nicht.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
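For reference, a minimal sketch of instantiating this loss with the parameters above (`scale=20.0` and cosine similarity are also the library defaults; `gather_across_devices` only matters for multi-GPU training and is left at its default here):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0")
# In-batch negatives ranking loss: each query is scored against all answers in the batch.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```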
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
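A minimal sketch of reproducing this setup with `SentenceTransformerTrainer`, mirroring the non-default hyperparameters above. The tiny in-memory dataset stands in for the unpublished 19,964-pair train set, and the per-epoch evaluation and best-checkpoint loading are omitted here because the evaluation data is not included with this card:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Stand-in dataset with the same "query"/"answer" columns as the real train set.
train_dataset = Dataset.from_dict({
    "query": ["Wie kann man die Jahresurlaubsübersicht exportieren?"],
    "answer": ["über das 3 Punkte Menü rechts oben. Mitarbeiter auswählen und exportieren"],
})

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0")
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="finetune_output",
    per_device_train_batch_size=64,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,   # as listed above; requires hardware with bf16 support
    tf32=True,   # as listed above; requires an Ampere-or-newer GPU
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```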
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Snowflake/snowflake-arctic-embed-m-v2.0_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:------------------------------------------------------:|
| 0.1282 | 10 | 3.4817 | - |
| 0.2564 | 20 | 3.3293 | - |
| 0.3846 | 30 | 3.2454 | - |
| 0.5128 | 40 | 2.9853 | - |
| 0.6410 | 50 | 2.8363 | - |
| 0.7692 | 60 | 2.6833 | - |
| 0.8974 | 70 | 2.5117 | - |
| 1.0 | 78 | - | 0.5070 |
| 1.0256 | 80 | 2.297 | - |
| 1.1538 | 90 | 2.2586 | - |
| 1.2821 | 100 | 2.1379 | - |
| 1.4103 | 110 | 2.1199 | - |
| 1.5385 | 120 | 2.0054 | - |
| 1.6667 | 130 | 1.9546 | - |
| 1.7949 | 140 | 1.8525 | - |
| 1.9231 | 150 | 1.8471 | - |
| 2.0 | 156 | - | 0.4817 |
| 2.0513 | 160 | 1.6686 | - |
| 2.1795 | 170 | 1.7224 | - |
| 2.3077 | 180 | 1.7122 | - |
| 2.4359 | 190 | 1.6487 | - |
| 2.5641 | 200 | 1.631 | - |
| 2.6923 | 210 | 1.5296 | - |
| 2.8205 | 220 | 1.5704 | - |
| 2.9487 | 230 | 1.4634 | - |
| **3.0** | **234** | **-** | **0.4692** |
| 3.0769 | 240 | 1.3748 | - |
| 3.2051 | 250 | 1.4602 | - |
| 3.3333 | 260 | 1.4275 | - |
| 3.4615 | 270 | 1.4183 | - |
| 3.5897 | 280 | 1.3431 | - |
| 3.7179 | 290 | 1.3013 | - |
| 3.8462 | 300 | 1.3206 | - |
| 3.9744 | 310 | 1.2743 | - |
| 4.0 | 312 | - | 0.4699 |
| 4.1026 | 320 | 1.1575 | - |
| 4.2308 | 330 | 1.2629 | - |
| 4.3590 | 340 | 1.2729 | - |
| 4.4872 | 350 | 1.1957 | - |
| 4.6154 | 360 | 1.1674 | - |
| 4.7436 | 370 | 1.1349 | - |
| 4.8718 | 380 | 1.166 | - |
| 5.0 | 390 | 1.0891 | 0.4707 |
| 5.1282 | 400 | 1.0469 | - |
| 5.2564 | 410 | 1.124 | - |
| 5.3846 | 420 | 1.1325 | - |
| 5.5128 | 430 | 1.0691 | - |
| 5.6410 | 440 | 1.0255 | - |
| 5.7692 | 450 | 1.0164 | - |
| 5.8974 | 460 | 1.0451 | - |
| 6.0 | 468 | - | 0.4578 |
| 6.0256 | 470 | 0.9404 | - |
| 6.1538 | 480 | 1.0043 | - |
| 6.2821 | 490 | 0.9964 | - |
| 6.4103 | 500 | 1.013 | - |
| 6.5385 | 510 | 0.9772 | - |
| 6.6667 | 520 | 0.9544 | - |
| 6.7949 | 530 | 0.9659 | - |
| 6.9231 | 540 | 0.9629 | - |
| 7.0 | 546 | - | 0.4576 |
| 7.0513 | 550 | 0.8522 | - |
| 7.1795 | 560 | 0.9288 | - |
| 7.3077 | 570 | 0.9705 | - |
| 7.4359 | 580 | 0.9301 | - |
| 7.5641 | 590 | 0.9388 | - |
| 7.6923 | 600 | 0.8569 | - |
| 7.8205 | 610 | 0.9414 | - |
| 7.9487 | 620 | 0.8796 | - |
| 8.0 | 624 | - | 0.4542 |
| 8.0769 | 630 | 0.8504 | - |
| 8.2051 | 640 | 0.9054 | - |
| 8.3333 | 650 | 0.9035 | - |
| 8.4615 | 660 | 0.9167 | - |
| 8.5897 | 670 | 0.8546 | - |
| 8.7179 | 680 | 0.8508 | - |
| 8.8462 | 690 | 0.8945 | - |
| 8.9744 | 700 | 0.8676 | - |
| 9.0 | 702 | - | 0.4526 |
| 9.1026 | 710 | 0.7934 | - |
| 9.2308 | 720 | 0.889 | - |
| 9.3590 | 730 | 0.9205 | - |
| 9.4872 | 740 | 0.8947 | - |
| 9.6154 | 750 | 0.8679 | - |
| 9.7436 | 760 | 0.8545 | - |
| 9.8718 | 770 | 0.8878 | - |
| 10.0 | 780 | 0.8483 | 0.4532 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu129
- Accelerate: 1.10.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->