SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
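The Pooling and Normalize modules above can be sketched in plain Python: mean pooling averages the token embeddings at unmasked positions, and Normalize() scales the result to unit L2 norm. The vectors below are toy values for illustration, not real model output:

```python
import math

# Toy token embeddings for one sentence (3 tokens, 4 dims) and an attention
# mask marking the last position as padding. Values are made up for illustration.
token_embeddings = [
    [1.0, 2.0, 0.0, 2.0],
    [3.0, 0.0, 2.0, 0.0],
    [9.0, 9.0, 9.0, 9.0],  # padded position, excluded by the mask
]
attention_mask = [1, 1, 0]

# Mean pooling: average only the unmasked token embeddings.
n = sum(attention_mask)
pooled = [
    sum(tok[d] for tok, m in zip(token_embeddings, attention_mask) if m) / n
    for d in range(4)
]

# Normalize(): scale the pooled vector to unit L2 norm.
norm = math.sqrt(sum(x * x for x in pooled))
embedding = [x / norm for x in pooled]

print(pooled)                                   # [2.0, 1.0, 1.0, 1.0]
print(round(sum(x * x for x in embedding), 6))  # 1.0
```

Because of the final Normalize() step, every embedding the model emits has unit length, which is what makes cosine similarity the natural similarity function.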

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("StephKeddy/sbert-IR-covid-search-v2")
# Run inference
sentences = [
    'coronavirus quarantine',
    'age profile of susceptibility, mixing, and social distancing shape the dynamics of the novel coronavirus disease 2019 outbreak in china [SEP] strict interventions were successful to control the novel coronavirus (covid-19) outbreak in china. daily contacts were reduced 7-9 fold during the covid-19 social distancing period, with most interactions restricted to the household.',
    'the economic impact of quarantine: sars in toronto as a case study [SEP] objectives over time, quarantine has become a classic public health intervention and has been used repeatedly when newly emerging infectious diseases have threatened to spread throughout a population. results our results indicate that quarantine is effective in containing newly emerging infectious diseases, and also cost saving when compared to not implementing a widespread containment mechanism.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.5333
cosine_accuracy@3 0.8
cosine_accuracy@5 0.8
cosine_accuracy@10 0.8667
cosine_precision@1 0.5333
cosine_precision@3 0.5111
cosine_precision@5 0.4267
cosine_precision@10 0.44
cosine_recall@1 0.0034
cosine_recall@3 0.0097
cosine_recall@5 0.0141
cosine_recall@10 0.0298
cosine_ndcg@10 0.4491
cosine_mrr@10 0.654
cosine_map@100 0.1424
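The cutoff metrics above follow the standard definitions: accuracy@k is 1 if any relevant document appears in the top k, precision@k and recall@k count relevant hits in the top k, and MRR@10 averages the reciprocal rank of the first hit. A sketch for a single query, using a toy ranking and toy document ids:

```python
# Toy ranked document ids for one query, plus the set of relevant ids.
ranked = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d7", "d2", "d5"}

def accuracy_at_k(ranked, relevant, k):
    # 1.0 if any relevant document appears in the top k.
    return 1.0 if any(d in relevant for d in ranked[:k]) else 0.0

def precision_at_k(ranked, relevant, k):
    return sum(d in relevant for d in ranked[:k]) / k

def recall_at_k(ranked, relevant, k):
    return sum(d in relevant for d in ranked[:k]) / len(relevant)

def reciprocal_rank(ranked, relevant, k=10):
    for i, d in enumerate(ranked[:k], start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

print(accuracy_at_k(ranked, relevant, 3))   # 1.0  (d7 is in the top 3)
print(precision_at_k(ranked, relevant, 5))  # 0.4  (d7 and d2 in the top 5)
print(recall_at_k(ranked, relevant, 5))     # 2/3 of the relevant set found
print(reciprocal_rank(ranked, relevant))    # 0.5  (first hit at rank 2)
```

The low recall@k values in the table alongside strong accuracy@k suggest each query has many relevant documents, so only a small fraction can appear in a top-10 cutoff.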

Training Details

Training Dataset

Unnamed Dataset

  • Size: 29,124 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 5, mean 18.59, max 50 tokens
    • sentence_1 (string): min 21, mean 98.07, max 236 tokens
    • sentence_2 (string): min 23, mean 81.59, max 180 tokens
  • Samples:
    Sample 1:
    • sentence_0: Looking for studies identifying ways to diagnose Covid-19 more rapidly.
    • sentence_1: a line immunoassay utilizing recombinant nucleocapsid proteins for detection of antibodies to human coronaviruses [SEP] most coronaviruses infecting humans cause mild diseases, whereas severe acute respiratory syndrome (sars)-associated coronavirus is an extremely dangerous pathogen. with this new technique, we found that recently identified nl63 and hku1 contribute significantly to the overall spectrum of coronavirus infections.
    • sentence_2: appealing for efficient, well organized clinical trials on covid-19 [SEP] the rapid emergence of clinical trials on covid-19 stimulated a wave of discussion in scientific community. our analysis focused on the issues of stage, design, randomization, blinding, primary endpoints definition and sample size of these trials.
    Sample 2:
    • sentence_0: Seeking information on best practices for activities and duration of quarantine for those exposed and/ infected to COVID-19 virus.
    • sentence_1: chemical, biologic, and nuclear quarantine [SEP] chemical, biologic, and nuclear quarantine
    • sentence_2: practical strategies against the novel coronavirus and covid-19the imminent global threat [SEP] the last month of 2019 harbingered the emergence of a viral outbreak that is now a major public threat globally. in-house isolation or quarantine of suspected cases to keep hospital admissions manageable and prevent in-hospital spread of the virus, and promoting general awareness about transmission routes are the practical strategies used to tackle the spread of covid-19.
    Sample 3:
    • sentence_0: what are the best masks for preventing infection by Covid-19?
    • sentence_1: role of viral bioaerosols in nosocomial infections and measures for prevention and control [SEP] the presence of patients with diverse pathologies in hospitals results in an environment that can be rich in various microorganisms including respiratory and enteric viruses, leading to outbreaks in hospitals or spillover infections to the community. these pathogens could transmit through direct or indirect physical contact, droplets or aerosols, with increasing evidence suggesting the importance of aerosol transmission in nosocomial infections of respiratory and enteric viruses.
    • sentence_2: face mask use and control of respiratory virus transmission in households [SEP] many countries are stockpiling face masks for use as a nonpharmaceutical intervention to control virus transmission during an influenza pandemic. we found that adherence to mask use significantly reduced the risk for ili-associated infection, but 50 of participants wore masks most of the time.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
    
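TripletLoss with these parameters penalizes a (query, positive, negative) triplet unless the anchor is at least `triplet_margin` closer to the positive than to the negative, i.e. loss = max(d(a, p) − d(a, n) + margin, 0) with Euclidean d. A sketch with toy vectors:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Zero only once the positive is at least `margin` closer than the negative.
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

anchor, positive, negative = [0.0, 0.0], [3.0, 4.0], [6.0, 8.0]
print(triplet_loss(anchor, positive, negative))  # max(5.0 - 10.0 + 5.0, 0) = 0.0
```

During training the gradient therefore only pushes on triplets that violate the margin; well-separated triplets contribute nothing.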

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
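With `lr_scheduler_type: linear`, `warmup_steps: 0`, and `learning_rate: 5e-05`, the learning rate decays linearly from its initial value to zero over the training run. A sketch of that schedule; the total step count below is an estimate from the listed dataset size, batch size, and epoch count, not a logged value:

```python
def linear_lr(step, total_steps, base_lr=5e-5, warmup_steps=0):
    # Linear warmup (none here), then linear decay to zero at total_steps.
    if step < warmup_steps:
        return base_lr * step / max(warmup_steps, 1)
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

total = 5463  # estimate: ceil(29124 samples / batch 16) * 3 epochs
print(linear_lr(0, total))      # 5e-05 at the start
print(linear_lr(total, total))  # 0.0 at the end
```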

Training Logs

Epoch Step Training Loss val_cosine_ndcg@10
0.2746 500 3.8945 0.4491

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.50.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Model size: 0.1B params (tensor type F32, Safetensors)