SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~0.1B parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
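The Pooling and Normalize modules above amount to mean pooling over non-padding token embeddings followed by L2 normalization. A minimal NumPy sketch of those two steps (the toy shapes and values below are illustrative, not model outputs):

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Mean-pool token embeddings over non-padding positions, then L2-normalize."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid div-by-zero
    pooled = summed / counts
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Toy batch of 1: seq length 3 (last position is padding), embedding dim 4
tokens = np.array([[[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [9.0, 9.0, 9.0, 9.0]]])  # padding row is masked out
mask = np.array([[1, 1, 0]])
emb = mean_pool_and_normalize(tokens, mask)
print(emb.shape)  # (1, 4)
```

Because of the final Normalize step, every embedding has unit length, so cosine similarity reduces to a dot product.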

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("StephKeddy/sbert-IR-covid-search")
# Run inference
sentences = [
    'what types of rapid testing for Covid-19 have been developed?',
    'on the assessment of more reliable covid-19 infected number: the italian case. [SEP] covid-19 (sars-cov-2) is the most recent pandemic disease the world is currently managing. patients affected by covid-19 are identified employing medical swabs applied mainly to (i) citizens with covid-19 symptoms such as flu or high temperature, or (ii) citizens that had contacts with covid-19 patients.',
    'lack of antiviral activity of darunavir against sars-cov-2 [SEP] given the high need and the absence of specific antivirals for treatment of covid-19 (the disease caused by severe acute respiratory syndrome-associated coronavirus-2 sars-cov-2), human immunodeficiency virus (hiv) protease inhibitors are being considered as therapeutic alternatives. overall, the data do not support the use of drv for treatment of covid-19.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
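Since the model was tuned for retrieval, a typical workflow embeds a query and a corpus separately and ranks corpus entries by cosine similarity, which is what `model.similarity` computes pairwise (for large corpora, `sentence_transformers.util.semantic_search` performs the same ranking at scale). A minimal sketch of the ranking step, with toy 3-dimensional vectors standing in for real 768-dimensional embeddings:

```python
import numpy as np

def rank_by_cosine(query_emb, corpus_embs, top_k=3):
    """Rank corpus embeddings by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per corpus entry
    order = np.argsort(-scores)[:top_k]  # best-first
    return [(int(i), float(scores[i])) for i in order]

# Toy embeddings (in practice these come from model.encode)
query = np.array([1.0, 0.0, 0.0])
corpus = np.array([[0.9, 0.1, 0.0],    # close to the query
                   [0.0, 1.0, 0.0],    # orthogonal
                   [0.7, 0.7, 0.0]])   # in between
print(rank_by_cosine(query, corpus, top_k=2))
```

Because the model already L2-normalizes its outputs, the normalization inside the helper is redundant for real embeddings but keeps the sketch self-contained.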

Evaluation

Metrics

Information Retrieval

Metric               Value
cosine_accuracy@1    0.6
cosine_accuracy@3    0.8
cosine_accuracy@5    0.9333
cosine_accuracy@10   0.9333
cosine_precision@1   0.6
cosine_precision@3   0.5778
cosine_precision@5   0.5733
cosine_precision@10  0.4867
cosine_recall@1      0.0037
cosine_recall@3      0.0114
cosine_recall@5      0.02
cosine_recall@10     0.0332
cosine_ndcg@10       0.5159
cosine_mrr@10        0.7156
cosine_map@100       0.1819
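To read these numbers: accuracy@k asks whether at least one relevant document appears in the top k results, while recall@k divides the relevant documents retrieved in the top k by the total number of relevant documents per query. That is why recall@k can be tiny even when accuracy@k is high: each query here has many relevant documents, and only a few fit in the top k. A sketch of both metrics (the toy relevance data below is invented for illustration):

```python
def accuracy_at_k(ranked_ids, relevant_ids, k):
    """1.0 if any relevant document appears in the top k results, else 0.0."""
    return 1.0 if set(ranked_ids[:k]) & set(relevant_ids) else 0.0

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of all relevant documents that appear in the top k results."""
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)

# Toy query: documents 0..99 are all relevant, and the top-3 retrieved
# documents happen to all be relevant.
ranked = [7, 42, 13]
relevant = list(range(100))
print(accuracy_at_k(ranked, relevant, 3))  # 1.0  (a relevant doc was found)
print(recall_at_k(ranked, relevant, 3))    # 0.03 (only 3 of 100 retrieved)
```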

Training Details

Training Dataset

Unnamed Dataset

  • Size: 10,836 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:

                 sentence_0   sentence_1   sentence_2
    type         string       string       string
    min tokens   5            3            3
    mean tokens  18.36        87.23        81.52
    max tokens   50           219          252
  • Samples:

    Sample 1
      sentence_0: coronavirus origin
      sentence_1: the origin, transmission and clinical therapies on coronavirus disease 2019 (covid-19) outbreak an update on the status [SEP] an acute respiratory disease, caused by a novel coronavirus (sars-cov-2, previously known as 2019-ncov), the coronavirus disease 2019 (covid-19) has spread throughout china and received worldwide attention. the emergence of sars-cov-2, since the severe acute respiratory syndrome coronavirus (sars-cov) in 2002 and middle east respiratory syndrome coronavirus (mers-cov) in 2012, marked the third introduction of a highly pathogenic and large-scale epidemic coronavirus into the human population in the twenty-first century.
      sentence_2: challenges in developing methods for quantifying the effects of weather and climate on water-associated diseases: a systematic review [SEP] infectious diseases attributable to unsafe water supply, sanitation and hygiene (e.g. cholera, leptospirosis, giardiasis) remain an important cause of morbidity and mortality, especially in low-income countries. furthermore, the methods often did not distinguish among the multiple sources of time-lags (e.g. patient physiology, reporting bias, healthcare access) between environmental drivers/exposures and disease detection.

    Sample 2
      sentence_0: Seeking information on best practices for activities and duration of quarantine for those exposed and/ infected to COVID-19 virus.
      sentence_1: recommendation to optimize safety of elective surgical care while limiting the spread of covid-19: primum non nocere [SEP] covid-19 has drastically altered our lives in an unprecedented manner, shuttering industries, and leaving most of the country in isolation as we adapt to the evolving crisis. the optimal solution of how to effectively balance the resumption of standard surgical care while doing everything possible to limit the spread of covid-19 is undetermined, and could include strategies such as social distancing, screening forms and tests including temperature screening, segregation of inpatient and outpatient teams, proper use of protective gear, and the use of ambulatory surgery centers (ascs) to provide elective, yet ultimately essential, surgical care while conserving resources and protecting the health of patients and health-care providers.
      sentence_2: killing more than pain: etiology and remedy for an opioid crisis [SEP] the search for effective pain relief has been ever present across human history. this chapter describes the etiology and epidemiology of the opioid crisis using public health and health belief model frameworks and reviews approaches that have been applied to address supply (e.g., overprescribing) and demand (e.g., medication treatments) sides of the equation.

    Sample 3
      sentence_0: coronavirus early symptoms
      sentence_1: nan
      sentence_2: impact of antibacterials on subsequent resistance and clinical outcomes in adult patients with viral pneumonia: an opportunity for stewardship [SEP] introduction: respiratory viruses are increasingly recognized as significant etiologies of pneumonia among hospitalized patients. method: this was a single-center retrospective cohort study to evaluate the impact of antibacterials in viral pneumonia on clinical outcomes and subsequent multidrug-resistant organism (mdro) infections/colonization.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
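The configured loss is max(0, d(a, p) − d(a, n) + margin), where d is Euclidean distance, margin is 5, and a, p, n are the anchor, positive, and negative embeddings; it pushes the negative at least the margin further from the anchor than the positive. In `sentence_transformers` this corresponds to `losses.TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)`. A minimal NumPy sketch of the formula itself (the vectors are toy stand-ins for real embeddings):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=5.0):
    """Euclidean triplet loss: max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.zeros(4)                       # anchor (e.g. a query embedding)
p = np.array([1.0, 0.0, 0.0, 0.0])    # positive: close to the anchor
n = np.array([10.0, 0.0, 0.0, 0.0])   # negative: far from the anchor
print(triplet_loss(a, p, n))  # d_pos=1, d_neg=10 -> max(0, 1 - 10 + 5) = 0.0
```

When the negative already exceeds the margin (as above) the loss is zero and the triplet contributes no gradient; harder negatives yield a positive loss.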
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch   Step  Training Loss  val_cosine_ndcg@10
0.7375  500   4.4901         -
1.0     678   -              0.5159

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.50.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
