SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 22.7M parameters (F32)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
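Because the final Normalize() module L2-normalizes every embedding, cosine similarity between two embeddings reduces to a plain dot product. A minimal sketch illustrating this (the example sentences are arbitrary):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Gswrtz/finetuned-neg-rag-embedder")
emb = model.encode(["an example sentence", "another example sentence"])

# Each embedding has unit length thanks to the Normalize() module...
print(np.linalg.norm(emb, axis=1))  # ~[1. 1.]
# ...so plain dot products already equal cosine similarities
print(emb @ emb.T)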

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gswrtz/finetuned-neg-rag-embedder")
# Run inference
sentences = [
    'Art is sometimes divided into two kinds, high art and popular art. High art attracts a much smaller population than popular art,but the number is large and growing. People who enjoy high art go to the opera and symphony concerts ; they read serious books and go to serious plays ; they keep up with art exhibitions. Popular art is mainly a kind of amusement. Some TV programs are meant to be watched today and forgotten tomorrow. Many popular songs are hits for a few weeks ; then they disappear. Other songs remain popular for such a long time that they become _ . The line between high and popular art is not always clear,however. Many people believe that rock music, for example, is a real art form. Many films are also taken seriously ,while others disappear as nothing more than amusement. Many more people like popular art than high art because   _  . A. popular art is better than high art. B. high art is not a real art form. C. popular art will be loved for a longer time than high art. D. popular art is a kind of amusement.',
    '**The arts**\n\nThe arts:\nThe arts are a very wide range of human practices of creative expression, storytelling and cultural participation. They encompass multiple diverse and plural modes of thinking, doing and being, in an extremely broad range of media. Both highly dynamic and a characteristically constant feature of human life, they have developed into innovative, stylized and sometimes intricate forms. This is often achieved through sustained and deliberate study, training and/or theorizing within a particular tradition, across generations and even between civilizations. The arts are a vehicle through which human beings cultivate distinct social, cultural and individual identities, while transmitting values, impressions, judgments, ideas, visions, spiritual meanings, patterns of life and experiences across time and space.',
    '**Genre painting**\n\nGenre painting:\nGenre painting (or petit genre), a form of genre art, depicts aspects of everyday life by portraying ordinary people engaged in common activities. One common definition of a genre scene is that it shows figures to whom no identity can be attached either individually or collectively, thus distinguishing it from history paintings (also called grand genre) and portraits. A work would often be considered as a genre work even if it could be shown that the artist had used a known person—a member of his family, say—as a model. In this case it would depend on whether the work was likely to have been intended by the artist to be perceived as a portrait—sometimes a subjective question. The depictions can be realistic, imagined, or romanticized by the artist. Because of their familiar and frequently sentimental subject matter, genre paintings have often proven popular with the bourgeoisie, or middle class.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
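Beyond pairwise similarity, the embeddings support retrieval. A minimal semantic-search sketch using the library's util.semantic_search helper (the corpus and query below are illustrative, not taken from this card):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Gswrtz/finetuned-neg-rag-embedder")

corpus = [
    "The ear gathers and amplifies sound waves.",
    "Genre painting depicts scenes of everyday life.",
]
query = "Which organ do humans hear with?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# For each query, returns a ranked list of {'corpus_id': ..., 'score': ...}
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])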

Training Details

Training Dataset

Unnamed Dataset

  • Size: 117,937 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 25 tokens, mean 121.28 tokens, max 256 tokens
    • sentence_1: string; min 10 tokens, mean 141.37 tokens, max 256 tokens
    • sentence_2: string; min 10 tokens, mean 140.67 tokens, max 256 tokens
  • Samples (first three):

    Sample 1
    • sentence_0: What organ do we use to hear sound? A. eye. B. the ear. C. amplifier. D. antennae.
    • sentence_1: The organ that we use to hear sound is the ear. Almost all the structures in the ear are needed for this purpose. Together, they gather and amplify sound waves and change their energy to electrical signals. The electrical signals travel to the brain, which interprets them as sound.
    • sentence_2: Mouth organ: A mouth organ is any free reed aerophone with one or more air chambers fitted with a free reed. Though it spans many traditions, it is played universally the same way by the musician placing their lips over a chamber or holes in the instrument, and blowing or sucking air to create a sound. Many of the chambers can be played together or each individually.

    Sample 2
    • sentence_0: Preveebral space thickness in adult at C6-C7 level is A. 7mm. B. 15mm. C. 22mm. D. 30mm.
    • sentence_1: Infraorbital margin: The infraorbital margin is the lower margin of the eye socket.
    • sentence_2: Cervical enlargement: The cervical enlargement corresponds with the attachments of the large nerves which supply the upper limbs. Located just above the brachial plexus, it extends from about the fifth cervical to the first thoracic vertebra, its maximum circumference (about 38 mm.) being on a level with the attachment of the sixth pair of cervical nerves. The reason behind the enlargement of the cervical region is because of the increased neural input and output to the upper limbs. An analogous region in the lower limbs occurs at the lumbar enlargement.

    Sample 3
    • sentence_0: Grey Turner's sign (Flank discolouration) is seen in A. Acute pyelonephritis. B. Acute cholecystitis. C. Acute pancreatitis. D. Acute peritonitis.
    • sentence_1: Bancroft's sign: Bancroft's sign, also known as Moses' sign, is a clinical sign found in patients with deep vein thrombosis of the lower leg involving the posterior tibial veins. The sign is positive if pain is elicited when the calf muscle is compressed forwards against the tibia, but not when the calf muscle is compressed from side to side. Like other clinical signs for deep vein thrombosis, such as Homans sign and Lowenberg's sign, this sign is neither sensitive nor specific for the presence of thrombosis.
    • sentence_2: Dance's sign: Dance's sign is an eponymous medical sign consisting of an investigation of the right lower quadrant of the abdomen for retraction, which can be an indication of intussusception, i.e. those with intussusception may have retraction of the right iliac fossa.
  • Loss: MultipleNegativesRankingLoss with these parameters (the objective is sketched after this list):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
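For orientation, this restates the standard in-batch objective that MultipleNegativesRankingLoss implements (it is not specific to this card): for a batch of B triplets (a_i, p_i, n_i), every positive and negative from the other triplets acts as an additional in-batch negative for anchor a_i, and the loss is a cross-entropy over scaled cosine similarities:

\mathcal{L} = -\frac{1}{B} \sum_{i=1}^{B} \log
\frac{\exp\big(s \cdot \cos(a_i, p_i)\big)}
     {\sum_{j=1}^{B} \Big[\exp\big(s \cdot \cos(a_i, p_j)\big) + \exp\big(s \cdot \cos(a_i, n_j)\big)\Big]}

Here s is the scale parameter above (20.0) and cos is the cosine similarity (cos_sim). Larger batches therefore provide more in-batch negatives per anchor.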

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
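A minimal sketch of how a comparable run could be set up with the non-default hyperparameters above (the dataset stand-in and output path are illustrative assumptions, not taken from this card):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative stand-in for the 117,937 (sentence_0, sentence_1, sentence_2) triplets
train_dataset = Dataset.from_dict({
    "sentence_0": ["What organ do we use to hear sound?"],
    "sentence_1": ["The organ that we use to hear sound is the ear."],
    "sentence_2": ["A mouth organ is any free reed aerophone."],
})

# scale=20.0 and cosine similarity match the loss parameters listed earlier
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-neg-rag-embedder",  # assumed output path
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()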

Training Logs

Epoch Step Training Loss
0.0678 500 0.6209
0.1356 1000 0.6208
0.2035 1500 0.5913
0.2713 2000 0.5989
0.3391 2500 0.5879
0.4069 3000 0.584
0.4748 3500 0.5627
0.5426 4000 0.5564
0.6104 4500 0.5511
0.6782 5000 0.5511
0.7461 5500 0.5345
0.8139 6000 0.5334
0.8817 6500 0.5377
0.9495 7000 0.5227

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.4
  • PyTorch: 2.7.1
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}