SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 22.7M parameters (F32 weights)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
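
For reference, the three modules above amount to: run the BERT encoder, mean-pool the token embeddings using the attention mask, then L2-normalize. Below is a minimal sketch of the same pipeline using transformers directly (illustrative only; it loads the base model rather than this fine-tune, and the input sentence is made up):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

encoded = tokenizer(["An example sentence."], padding=True, truncation=True,
                    max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Pooling module: mean over token embeddings, ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize module: unit-length vectors, so dot product equals cosine similarity.
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])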

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gswrtz/finetuned-cos-rag-embedder")
# Run inference
sentences = [
    'If the difference between the length and breadth of a rectangle is 23 m and its perimeter is 206 m, what is its area? A. 2510. B. 2530. C. 2515. D. 2520.',
    "A circle with radius $3$ has a sector with a $345^\\circ$ central angle. What is the area of the sector? ${9\\pi}$ $\\color{#9D38BD}{345^\\circ}$ ${\\dfrac{69}{8}\\pi}$ ${3}$\n\nHints:\nFirst, calculate the area of the whole circle. Then the area of the sector is some fraction of the whole circle's area. $A_c = \\pi r^2$ $A_c = \\pi (3)^2$ $A_c = 9\\pi$ The ratio between the sector's central angle $\\theta$ and $360^\\circ$ is equal to the ratio between the sector's area, $A_s$ , and the whole circle's area, $A_c$ $\\dfrac{\\theta}{360^\\circ} = \\dfrac{A_s}{A_c}$ $\\dfrac{345^\\circ}{360^\\circ} = \\dfrac{A_s}{9\\pi}$ $\\dfrac{23}{24} = \\dfrac{A_s}{9\\pi}$ $\\dfrac{23}{24} \\times 9\\pi = A_s$ $\\dfrac{69}{8}\\pi = A_s$",
    'The area of a rectangular surface is calculated as its length multiplied by its width.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
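
Because the Normalize module makes every embedding unit-length, cosine similarity reduces to a dot product, so the model slots directly into a retrieval pipeline. A minimal semantic-search sketch follows; the corpus and query are made up for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Gswrtz/finetuned-cos-rag-embedder")

# Hypothetical mini-corpus, for illustration only.
corpus = [
    "The area of a rectangle is its length multiplied by its width.",
    "A blood gas test measures blood pH and blood gas tension values.",
    "Dog sledding tours are offered in Alaska and Greenland.",
]
corpus_embeddings = model.encode(corpus)

query_embedding = model.encode("How do I find the area of a rectangle?")

# Rank the corpus by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")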

Training Details

Training Dataset

Unnamed Dataset

  • Size: 115,928 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 25 tokens, mean: 172.44 tokens, max: 256 tokens
    • sentence_1: string; min: 10 tokens, mean: 128.32 tokens, max: 256 tokens
    • sentence_2: string; min: 10 tokens, mean: 116.61 tokens, max: 256 tokens
  • Samples:

    Sample 1
    • sentence_0: Back in time, dog sledding was the only method of transportation in frozen parts of the world. You, too, can participate in this time-honored tradition after looking into dog sledding adventures and tours. Alaska Dog Sledding It offers two dog sledding tours in Alaska, the Golsovia Dog Trip and the Iditarod Dog Trip. On both trips, you will learn how to drive your own dog team. The Golsovia Dog Trip lasts six days. The Iditarod Dog Trip is held just before the internationally famous Iditarod race so that participants will be able to attend the Iditarod events. For more information, please see: Alaska Dog Sledding Greenland Expedition Specialists Travel through the shining ice and snow on the land of East Greenland via dog sledding. During this dog sledding vacation, you will be camping during the cold of winter and will take part in caring for the dogs and the camps. They also organize and guide kiting trips, sea kayaking trips and mountaineering vacations. For more informa...
    • sentence_1: Dog camp: Dog camp is a form of vacation for owners accompanied by their dogs with dog-centric activities ranging from casual recreational playtime to serious obedience or sport training. In many dog camps dogs can play and socialize throughout the day while supervised by their owners. Some of the activities at a dog camp might include running, fetching balls or frisbees, chasing other dogs, tug of war, socializing amongst playmates, and lessons.
    • sentence_2: Dog camp Dog camp: Dog camp is a form of vacation for owners accompanied by their dogs with dog-centric activities ranging from casual recreational playtime to serious obedience or sport training. In many dog camps dogs can play and socialize throughout the day while supervised by their owners. Some of

    Sample 2
    • sentence_0: My brother is 3 years elder to me. My father was 28 years of age when my sister was born, while my mother was 26 years of age when I was born. If my sister was 4 years of age when my brother was born, then what was the age of my father and mother respectively when my brother was born? A. 32 yrs, 23 yrs. B. 35 yrs, 29 yrs. C. 35 yrs, 33 yrs. D. None of these.
    • sentence_1: Abby is $3$ years old. Her brother Ben is $4$ years older than she is. How old is Ben? Hints: To find how old ${\text{Ben}}$ is, we start with how old ${\text{Abby}}$ is, and add ${4\text{ years}}$: ${3} + {4} = {\Box}$. Adding gives ${3} + {4} = {7}$, so ${\text{Ben}}$ is ${7}$ years old.
    • sentence_2: Ishaan is $3$ times as old as Christopher and is also $14$ years older than Christopher. How old is Ishaan? Hints: We can use the given information to write down two equations that describe the ages of Ishaan and Christopher. Let Ishaan's current age be $i$ and Christopher's current age be $c$: ${i = 3c}$ and ${i = c + 14}$. Now we have two independent equations and can solve for our two unknowns. One way to solve for $i$ is to solve the second equation for $c$ and substitute that value into the first equation. Solving the second equation for $c$ gives ${c = i - 14}$. Substituting this into the first equation gives ${i = 3}{(i - 14)}$, which combines the information about $i$ from both original equations. Simplifying the right side: $i = 3i - 42$. Solving for $i$: $2i = 42$, so $i = 21$.

    Sample 3
    • sentence_0: Blood specimen for neonatal thyroid screening is obtained at: A. Cord blood. B. 24 hours after birth. C. 48 hours after birth. D. 72 hours after birth.
    • sentence_1: TRH stimulation test: Prior to the availability of sensitive TSH assays, thyrotropin releasing hormone or TRH stimulation tests were relied upon for confirming and assessing the degree of suppression in suspected hyperthyroidism. Typically, this stimulation test involves determining basal TSH levels and levels 15 to 30 minutes after an intravenous bolus of TRH. Normally, TSH would rise into the concentration range measurable with less sensitive TSH assays.
    • sentence_2: Blood gas test: A blood gas test or blood gas analysis tests blood to measure blood gas tension values; it also measures blood pH and the level and base excess of bicarbonate. The source of the blood is reflected in the name of each test: arterial blood gases come from arteries, venous blood gases come from veins and capillary blood gases come from capillaries. The blood gas tension levels of partial pressures can be used as indicators of ventilation, respiration and oxygenation. Analysis of paired arterial and venous specimens can give insights into the aetiology of acidosis in the newborn.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
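This loss uses in-batch negatives: each (sentence_0, sentence_1) pair is a positive, and every other sentence_1 in the batch, plus the sentence_2 hard negatives, serves as a negative. Cosine similarities are scaled by 20.0 and fed to a cross-entropy objective. A minimal sketch of the computation (illustrative, not the library's exact implementation):

import torch
import torch.nn.functional as F

def mnr_loss(anchors, positives, hard_negatives, scale=20.0):
    # anchors, positives, hard_negatives: (batch, dim) embedding tensors.
    # Candidate pool: all positives plus all hard negatives in the batch.
    candidates = torch.cat([positives, hard_negatives], dim=0)  # (2*batch, dim)
    # Scaled cosine similarity of every anchor against every candidate.
    scores = scale * F.cosine_similarity(
        anchors.unsqueeze(1), candidates.unsqueeze(0), dim=-1
    )  # (batch, 2*batch)
    # Anchor i's positive sits at column i; everything else is a negative.
    labels = torch.arange(len(anchors))
    return F.cross_entropy(scores, labels)

At the batch size of 16 used here, each anchor is therefore contrasted against 1 positive and 31 negatives (15 in-batch positives from other pairs plus 16 hard negatives).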

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
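
A hedged sketch of how a run with the non-default settings above might be configured through SentenceTransformerTrainer (the dataset contents and output path are placeholders; with three columns, the trainer passes them to the loss as anchor, positive, negative):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder triplet dataset with the sentence_0/sentence_1/sentence_2 columns.
train_dataset = Dataset.from_dict({
    "sentence_0": ["anchor text"],
    "sentence_1": ["positive text"],
    "sentence_2": ["hard negative text"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-cos-rag-embedder",  # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()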

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch  | Step | Training Loss
0.0690 |  500 | 0.6132
0.1380 | 1000 | 0.6121
0.2070 | 1500 | 0.5883
0.2760 | 2000 | 0.5838
0.3450 | 2500 | 0.5689
0.4140 | 3000 | 0.55
0.4830 | 3500 | 0.5422
0.5520 | 4000 | 0.5257
0.6210 | 4500 | 0.5099
0.6900 | 5000 | 0.5008
0.7590 | 5500 | 0.5066
0.8280 | 6000 | 0.4941
0.8970 | 6500 | 0.4881
0.9661 | 7000 | 0.4898

Framework Versions

  • Python: 3.12.7
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.7.0
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}