SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L6-v2. The base model was fine-tuned on an 800k randomly sampled subset of the passage-ranking task of the MS MARCO train-triples-small dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 22.7M parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
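The Pooling layer above uses masked mean pooling (pooling_mode_mean_tokens) over the Transformer's token embeddings. A minimal numpy sketch of that operation, using toy 3-dimensional token embeddings in place of the model's real 384-dimensional outputs:

```python
import numpy as np

# Toy token embeddings: batch of 1 sentence, 4 tokens, 3 dims (the real model uses 384).
token_embeddings = np.array([[[1.0, 2.0, 3.0],
                              [3.0, 2.0, 1.0],
                              [5.0, 5.0, 5.0],
                              [0.0, 0.0, 0.0]]])
# Attention mask: the last token is padding and must not contribute to the mean.
attention_mask = np.array([[1, 1, 1, 0]])

mask = attention_mask[..., None]                # (1, 4, 1), broadcastable
summed = (token_embeddings * mask).sum(axis=1)  # sum over non-padding tokens
counts = mask.sum(axis=1)                       # number of non-padding tokens
sentence_embedding = summed / counts            # masked mean

print(sentence_embedding)  # [[3. 3. 3.]]
```

The padding token is zeroed out by the mask and excluded from the count, so the mean is taken over the three real tokens only.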

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("manupande21/all-MiniLM-L6-v2-finetuned-triplets")
# Run inference
sentences = [
    'how old to serve alcohol il',
    'How old do you have to be to serve alcohol in Illinois? A: As of January 2015, a person must be at least 18 years old to serve alcohol in Illinois, according to the Illinois Liquor Control Commission. However, the Illinois Liquor Control Act allows jurisdictional control on the issue, and some localities in Illinois require ages older than 18.',
    '| County of Peoria, IL and Merrick & Company | County of Peoria, IL and the Surdex Corporation | County of Peoria, IL and the Sanborn Map Company, Inc. | County of Peoria, IL and Illinois Department of Transportation | County of Peoria, IL |.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
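For semantic search, embeddings like those above can be compared with cosine similarity to rank corpus passages against a query. A minimal numpy sketch using toy 4-dimensional vectors standing in for real model.encode() outputs:

```python
import numpy as np

def cosine_sim(a, b):
    # Normalize rows, then a matrix product gives pairwise cosine similarities.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy vectors in place of 384-dimensional model.encode() outputs.
query = np.array([[1.0, 0.0, 1.0, 0.0]])
corpus = np.array([
    [1.0, 0.1, 0.9, 0.0],   # passage similar to the query
    [0.0, 1.0, 0.0, 1.0],   # unrelated passage
])
scores = cosine_sim(query, corpus)[0]
best = int(np.argmax(scores))
print(best)  # 0 -- the similar passage ranks first
```

With the real model, `model.similarity(query_emb, corpus_emb)` performs the same cosine comparison.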

Evaluation

Metrics

Triplet

Metric           Value
cosine_accuracy  0.9667
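cosine_accuracy is the fraction of evaluation triplets for which the anchor is closer (by cosine similarity) to the positive than to the negative. A minimal numpy sketch of the metric on toy 2-dimensional embeddings:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy (anchor, positive, negative) embedding triplets.
triplets = [
    (np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])),  # positive wins
    (np.array([0.0, 1.0]), np.array([1.0, 0.0]), np.array([0.1, 0.9])),  # negative wins
]
correct = sum(cos(a, p) > cos(a, n) for a, p, n in triplets)
accuracy = correct / len(triplets)
print(accuracy)  # 0.5
```

The 0.9667 reported here means that for about 96.7% of test triplets the model scores the relevant passage above the irrelevant one.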

Training Details

Training Dataset

Unnamed Dataset

  • Size: 800,000 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:

    Column      Type    Min tokens  Mean tokens  Max tokens
    sentence_0  string  4           9.01         38
    sentence_1  string  17          80.57        206
    sentence_2  string  23          77.25        234
  • Samples:

    Sample 1
      sentence_0: what grows in urine culture
      sentence_1: A urine culture is a test to find and identify germs (usually bacteria) that may be causing a urinary tract infection (UTI). Urine in the bladder normally is sterile—it does not contain any bacteria or other organisms (such as fungi). But bacteria can enter the urethra and cause an infection. A urine sample is kept under conditions that allow bacteria and other organisms to grow. If few or no organisms grow, the test is negative. If organisms grow in numbers large enough to indicate an infection, the culture is positive.
      sentence_2: Given these findings, you recommend the family take her to the emergency department to obtain blood work and a urine culture. She has an elevated WBC=19.6 and her BUN/Cr=9/0.5. You speak to the ED staff and recommend she be admitted and placed on broad spectrum IV antibiotics and IV fluids.

    Sample 2
      sentence_0: what a flashing yellow light on a intersection means
      sentence_1: Something requires extra attention. At an intersection, a flashing circular yellow means to take extra caution, and watch for traffic on the cross road (which has a flashing red light). A flashing red light is the same as a stop sign.
      sentence_2: White cake has only egg whites while yellow cake contains whole eggs, and it’s the yolks that give the cake its yellow tint. Beyond that, other differences include: * White cake is made with cake flour, whereas yellow cake uses all-purpose flour.* White cake has a very light, almost sponge cake-like texture, whereas yellow cake has a moister and denser texture.eyond that, other differences include: * White cake is made with cake flour, whereas yellow cake uses all-purpose flour. * White cake has a very light, almost sponge cake-like texture, whereas yellow cake has a moister and denser texture.

    Sample 3
      sentence_0: what airport to fly into for washington dc
      sentence_1: Or you can rent a car. The closest in of Washington, D.C.’s airports is Ronald Reagan Washington National Airport. That’s your likely destination if you fly U.S. Airways. The U.S. Airways Shuttle and the Delta Shuttle also land there.
      sentence_2: Reviewed 3 days ago NEW. Capitol Hill Hotel, Washington D. C. We are from Chicago and arrived at the Capitol Hill Hotel after spending time with my son at the NIH hospital in Washington DC. It was a rather stressful time so we were looking to relax and enjoy the rich history of Washington DC.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
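This loss penalizes a triplet unless the positive is at least triplet_margin closer to the anchor, in Euclidean distance, than the negative: L = max(0, d(a, p) - d(a, n) + margin). A minimal numpy sketch using the margin of 5 configured above:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Euclidean distances, matching TripletDistanceMetric.EUCLIDEAN.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])   # distance 1 from the anchor
n = np.array([7.0, 0.0])   # distance 7 from the anchor
print(triplet_loss(a, p, n))  # 0.0 -- negative is already margin-further than positive
```

When the margin condition is violated (e.g. swapping p and n above), the loss is positive and training pushes the positive closer and the negative further away.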
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 1024
  • per_device_eval_batch_size: 1024
  • fp16: True
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 1024
  • per_device_eval_batch_size: 1024
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch   Step  Training Loss  test-eval_cosine_accuracy
0.6394   500  1.5694         -
1.0      782  -              0.9632
1.2788  1000  0.9311         -
1.9182  1500  0.852          -
2.0     1564  -              0.9665
2.5575  2000  0.7983         -
3.0     2346  -              0.9667

Framework Versions

  • Python: 3.11.5
  • Sentence Transformers: 4.1.0
  • Transformers: 4.42.4
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}