SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m-v2.0. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m-v2.0
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'GteModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
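
Note that the final Normalize() module makes every embedding unit-length, so dot product and cosine similarity coincide and produce identical rankings. A minimal sketch of this property (the vectors below are random placeholders, not model outputs):

import numpy as np

# Random placeholder vectors standing in for 768-dimensional embeddings.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(768), rng.standard_normal(768)

# Unit-normalize, as the Normalize() module does to every embedding.
a /= np.linalg.norm(a)
b /= np.linalg.norm(b)

# On unit vectors, the dot product equals the cosine similarity.
assert np.isclose(a @ b, (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))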

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("BjarneNPO/BjarneNPO-26_08_2025_19_57_49")
# Run inference
queries = [
    "Userin hinterlegt Email-Adresse im Benutzerkonto und speichert. Aber die Adresse wird trotz Best\u00e4tigung nicht gespeichert. \r\nEMA ist notwendig f\u00fcr 2FA\r\n\r\n Roesler =  jil.roesler@cse.ruhr",
]
documents = [
    'N',
    'Unter dem Namen der Dame gibt es nur einen Login. Vielleicht schaut sie mit dem Login einer anderen Kollegin auf die zweite Einrichtung? Oder sie hat einen Login als Träger? Dies klärt sie mit der Einrichtung ab.',
    'Userin hat die Rolle "Mitarbeiter".',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.2199, 0.2094, 0.1052]])
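
As a small follow-up, the similarity scores can be used directly for retrieval, for example to rank the candidate documents for the first query (a sketch building on the variables above):

import torch

# Rank the documents for the first query by descending similarity
ranking = torch.argsort(similarities[0], descending=True)
for rank, idx in enumerate(ranking.tolist(), start=1):
    print(f"{rank}. ({similarities[0][idx]:.4f}) {documents[idx]}")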

Evaluation

Metrics

Information Retrieval

  • Dataset: Snowflake/snowflake-arctic-embed-m-v2.0
  • Evaluated with scripts.InformationRetrievalEvaluatorCustom.InformationRetrievalEvaluatorCustom
Metric                Value
cosine_accuracy@1     0.3
cosine_accuracy@3     0.4
cosine_accuracy@5     0.4
cosine_accuracy@10    0.9
cosine_precision@1    0.3
cosine_precision@3    0.1667
cosine_precision@5    0.1
cosine_precision@10   0.12
cosine_recall@1       0.0333
cosine_recall@3       0.0556
cosine_recall@5       0.0556
cosine_recall@10      0.1333
cosine_ndcg@10        0.149
cosine_mrr@10         0.4071
cosine_map@100        0.0636
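
The custom evaluator script is not included in this card, but the stock InformationRetrievalEvaluator from Sentence Transformers computes the same family of metrics. A minimal sketch with placeholder data (the queries, corpus, and relevance judgments below are illustrative, not the actual evaluation set):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder evaluation data: query_id -> text, doc_id -> text,
# query_id -> set of relevant doc_ids.
queries = {"q1": "Die Feiertage in den Stammdaten stimmen nicht."}
corpus = {
    "d1": "Es besteht bereits ein Ticket dafür.",
    "d2": 'Userin hat die Rolle "Mitarbeiter".',
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
model = SentenceTransformer("BjarneNPO/BjarneNPO-26_08_2025_19_57_49")
print(evaluator(model))  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100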

Training Details

Training Dataset

Unnamed Dataset

  • Size: 86,218 training samples
  • Columns: query and answer
  • Approximate statistics based on the first 1000 samples:
    • query: string; min 5 tokens, mean 80.67 tokens, max 5231 tokens
    • answer: string; min 3 tokens, mean 25.16 tokens, max 238 tokens
  • Samples:
    • query: Nun ist die Monatsmeldung erfolgt, aber rote Ausrufezeichen tauchen auf.
      answer: Userin an das JA verwiesen, diese müssten ihr die Schloss-Monate zur Überarbeitung im Kibiz.web zurückgeben. Userin dazu empfohlen, die Kinder die nicht in kitaplus sind, aber in Kibiz.web - im KiBiz.web zu entfernen, wenn diese nicht vorhanden sind.
    • query: Die Feiertage in den Stammdaten stimmen nicht.
      answer: Es besteht bereits ein Ticket dafür.
    • query: Abrechnung kann nicht final freigegeben werden, es wird aber keiner Fehlermeldung angeziegt im Hintergrund ist eine Fehlermeldung zu sehen. An Entwickler weitergeleitet.
      answer: Korrektur vorgenommen.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
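
With these parameters, every other answer in a batch serves as an in-batch negative for each (query, answer) pair, which is also why the no_duplicates batch sampler below matters. A minimal sketch of wiring up dataset and loss, assuming the Sentence Transformers v3+ training API (the example row is a placeholder):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Placeholder (query, answer) pairs in the card's column layout.
train_dataset = Dataset.from_dict({
    "query": ["Die Feiertage in den Stammdaten stimmen nicht."],
    "answer": ["Es besteht bereits ein Ticket dafür."],
})

# trust_remote_code may be required for the GTE-based base model.
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0", trust_remote_code=True)
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)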
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
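
These settings map onto SentenceTransformerTrainingArguments as follows (a sketch; output_dir is a placeholder):

from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",                # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=16,     # effective batch size: 64 * 16 = 1024 per device
    learning_rate=2e-5,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

Together with the dataset and loss sketched above, these arguments would be passed to a SentenceTransformerTrainer to run the fine-tuning.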

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss   Snowflake/snowflake-arctic-embed-m-v2.0_cosine_ndcg@10
0.1187   10     2.81            -
0.2374   20     2.3706          -
0.3561   30     2.1261          -
0.4748   40     1.9089          -
0.5935   50     1.8251          -
0.7122   60     1.7666          -
0.8309   70     1.7305          -
0.9496   80     1.2862          -
1.0      85     -               0.149

  • The last row (epoch 1.0, step 85) denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.55.2
  • PyTorch: 2.8.0+cu129
  • Accelerate: 1.10.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}