SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/shreyaspullehf/superconductor-search-v1

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
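
The three modules map one-to-one onto the pipeline: (0) the MPNet encoder produces per-token embeddings, (1) attention-masked mean pooling averages them into one 768-dimensional vector, and (2) L2 normalization scales it to unit length. As an illustrative sketch (not an official alternative API), the same pipeline can be reproduced with plain transformers, assuming the repository stores the underlying transformer weights at its root, as Sentence Transformers does by default:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "shreyaspullehf/superconductor-search-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def embed(texts):
    # (0) Transformer: tokenize and encode, truncating at max_seq_length=384
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=384, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
    # (1) Pooling: mean over non-padding tokens only
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # (2) Normalize: unit-length vectors, so dot products equal cosine similarities
    return F.normalize(pooled, p=2, dim=1)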

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'can magnets float on superconductors',
    'We discuss the spatial structure of the Cooper pair in dilute neutron matter\nand neutron-rich nuclei by means of the BCS theory and the\nSkyrme-Hartree-Fock-Bogoliubov model, respectively. The neutron pairing in\ndilute neutron matter is close to the region of the BCS-BEC crossover in a wide\ndensity range, giving rise to a spatially compact Cooper pair whose size is\nsmaller than the average interparticle distance. This behavior extends to\nmoderately low density ($\\sim 10^{-1}$ of the saturation density) where the\nCooper pair size becomes smallest ($\\sim 5$ fm). The Cooper pair in finite\nnuclei also exhibits the spatial correlation favoring the coupling of neutrons\nat small relative distances $r \\lesssim 3$ fm with large probability.\nNeutron-rich nuclei having a small neutron separation energy may provide an\nopportunity to probe the spatial correlation, since the neutron pairing and the\nspatial correlation persist also in an area of low-density neutron\ndistribution extending from the surface to far outside the nucleus.',
    'Brian David Josephson (born 4 January 1940) is a British theoretical physicist and emeritus professor at the University of Cambridge. He shared the 1973 Nobel Prize in Physics with Leo Esaki and Ivar Giaever for his discovery of the Josephson effect, made in 1962 when he was a Ph.D. student at Cambridge.\nJosephson has spent his academic career as a member of the Theory of Condensed Matter Group in Cambridge\'s Cavendish Laboratory. He has been a Fellow of Trinity College, Cambridge, since 1962, and served as Professor of Physics from 1974 until 2007.\nIn the early 1970s, Josephson took up Transcendental Meditation and turned his attention to issues outside the boundaries of mainstream science. He set up the Mind–Matter Unification Project at Cavendish to explore the idea of intelligence in nature, the relationship between quantum mechanics and consciousness, and the synthesis of science and Eastern mysticism, broadly known as quantum mysticism. He has expressed support for topics such as parapsychology, water memory and cold fusion, which has made him a focus of criticism from fellow scientists.\n\n\n== Education ==\nBrian David Josephson was born on 4 January 1940 in Cardiff, Wales, to Jewish parents, Abraham Josephson and Mimi Weisbard. He attended Cardiff High School, where he credits some of the school masters for having helped him, particularly the physics master, Emrys Jones, who introduced him to theoretical physics. In 1957, he went up to Cambridge, where he initially read mathematics at Trinity College, Cambridge. After completing Maths Part II in two years, and finding it somewhat sterile, he decided to switch to physics.\nJosephson was known at Cambridge as a brilliant but shy student. Physicist John Waldram recalled overhearing Nicholas Kurti, an examiner from Oxford, discuss Josephson\'s exam results with David Shoenberg, Reader in Physics at Cambridge, and asking: "Who is this chap Josephson? He seems to be going through the theory like a knife through butter.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0149,  0.1589],
#         [-0.0149,  1.0000,  0.0733],
#         [ 0.1589,  0.0733,  1.0000]])
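
Since the model is intended for semantic search, a typical pattern is to embed a corpus once and rank it against incoming queries. A minimal sketch, with an illustrative two-document placeholder corpus:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("shreyaspullehf/superconductor-search-v1")

# Illustrative placeholder corpus; in practice this is your document collection
corpus = [
    "Type-II superconductors pin magnetic flux lines, which lets a magnet levitate stably above them.",
    "Brian Josephson shared the 1973 Nobel Prize in Physics for the Josephson effect.",
]
query = "can magnets float on superconductors"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity of the query against every corpus entry, shape (1, len(corpus))
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())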

Training Details

Training Dataset

Unnamed Dataset

  • Size: 11,515 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:

                 sentence_0          sentence_1            label
    type         string              string                float
    details      min: 3 tokens       min: 14 tokens        min: 0.0
                 mean: 9.41 tokens   mean: 341.81 tokens   mean: 0.25
                 max: 33 tokens      max: 384 tokens       max: 1.0
  • Samples:

    sentence_0: current density definition
    sentence_1: h = 0, we see as expected a conventional diamagnetic Meissner effect. Interestingly, even for a very small exchange field h/∆0 = 0.05 we observe that χ vs. T displays a re-entrant behavior. To understand the physical origin of this behavior, we first note that when the normal metal length satisfies L ≫ ξS, the proximity-induced minigap is much smaller than the superconducting gap ∆0. The minigap is determined by the Thouless energy [29] εT = D/L², so that longer samples have smaller minigaps. Now, in the presence of exchange fields h/∆0 ∼ εT/∆0 = (ξS/L)², the triplet proximity effect becomes resonant and results in a zero-energy peak in the density of states [30]. For smaller minigaps (larger L), only a small exchange field is needed to get sufficiently close to resonance, which explains why a long normal metal can be influenced by a small h. As seen in the inset of Fig. 2 (a), a change in ξS/L will also give re-entrant behaviour as long as h is lowered accordingly. Since εT is...
    label: 0.0

    sentence_0: polaron effect in superconductors
    sentence_1: High-temperature superconductivity (high-Tc or HTS) is superconductivity in materials with a critical temperature (the temperature below which the material behaves as a superconductor) above 77 K (−196.2 °C; −321.1 °F), the boiling point of liquid nitrogen. They are "high-temperature" only relative to previously known superconductors, which function only closer to absolute zero. The first high-temperature superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller. Although the critical temperature is around 35.1 K (−238.1 °C; −396.5 °F), this material was modified by Ching-Wu Chu to make the first high-temperature superconductor with critical temperature 93 K (−180.2 °C; −292.3 °F). Bednorz and Müller were awarded the Nobel Prize in Physics in 1987 "for their important break-through in the discovery of superconductivity in ceramic materials". Most high-Tc materials are type-II superconductors. The major advantage of high-temperature superconductors is tha...
    label: 1.0

    sentence_0: why hbcco is a SC material
    sentence_1: High-temperature superconductivity (high-Tc or HTS) is superconductivity in materials with a critical temperature (the temperature below which the material behaves as a superconductor) above 77 K (−196.2 °C; −321.1 °F), the boiling point of liquid nitrogen. They are "high-temperature" only relative to previously known superconductors, which function only closer to absolute zero. The first high-temperature superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller. Although the critical temperature is around 35.1 K (−238.1 °C; −396.5 °F), this material was modified by Ching-Wu Chu to make the first high-temperature superconductor with critical temperature 93 K (−180.2 °C; −292.3 °F). Bednorz and Müller were awarded the Nobel Prize in Physics in 1987 "for their important break-through in the discovery of superconductivity in ceramic materials". Most high-Tc materials are type-II superconductors. The major advantage of high-temperature superconductors is tha...
    label: 1.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 4
  • multi_dataset_batch_sampler: round_robin
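
Taken together, the loss and the non-default hyperparameters above correspond roughly to the training setup sketched below. This is a reconstruction under stated assumptions, not the exact training script: the single example row is a hypothetical stand-in for the unnamed 11,515-pair dataset, and output_dir is illustrative.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

# Start from the base model named in this card
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical stand-in row: (query, passage, similarity label in [0, 1])
train_dataset = Dataset.from_dict({
    "sentence_0": ["can magnets float on superconductors"],
    "sentence_1": ["Type-II superconductors pin magnetic flux, letting magnets levitate."],
    "label": [1.0],
})

# CosineSimilarityLoss regresses cosine(u, v) onto the label with MSE,
# matching the loss_fct listed in the Training Dataset section
loss = CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="superconductor-search-v1",  # illustrative
    num_train_epochs=4,
    per_device_train_batch_size=16,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()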

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss
0.6944    500   0.1081
1.3889   1000   0.0745
2.0833   1500   0.0631
2.7778   2000   0.0485
3.4722   2500   0.0422

Framework Versions

  • Python: 3.9.6
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.8.0
  • Accelerate: 1.10.1
  • Datasets: 4.3.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}