Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (arXiv:1908.10084)
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
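The Pooling module above averages the token embeddings (mean pooling over real, non-padding tokens), and Normalize() scales each sentence vector to unit length. A minimal sketch of those two steps in plain PyTorch, using random tensors in place of real BERT outputs (shapes and names here are illustrative, not the library's internals):

```python
import torch
import torch.nn.functional as F

def mean_pool(token_embeddings, attention_mask):
    # token_embeddings: (batch, seq_len, 384); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # (batch, 384)

batch, seq_len, dim = 3, 10, 384
token_embeddings = torch.randn(batch, seq_len, dim)
attention_mask = torch.ones(batch, seq_len)
attention_mask[:, 7:] = 0  # pretend the last 3 positions are padding

pooled = mean_pool(token_embeddings, attention_mask)
normalized = F.normalize(pooled, p=2, dim=1)  # the Normalize() step: unit-length vectors
print(normalized.shape)  # torch.Size([3, 384])
```

Because of the final normalization, every embedding this model produces has L2 norm 1, which is what makes dot products and cosine similarity interchangeable downstream.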
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'If the difference between the length and breadth of a rectangle is 23 m and its perimeter is 206 m, what is its area? A. 2510. B. 2530. C. 2515. D. 2520.',
"A circle with radius $3$ has a sector with a $345^\\circ$ central angle. What is the area of the sector? ${9\\pi}$ $\\color{#9D38BD}{345^\\circ}$ ${\\dfrac{69}{8}\\pi}$ ${3}$\n\nHints:\nFirst, calculate the area of the whole circle. Then the area of the sector is some fraction of the whole circle's area. $A_c = \\pi r^2$ $A_c = \\pi (3)^2$ $A_c = 9\\pi$ The ratio between the sector's central angle $\\theta$ and $360^\\circ$ is equal to the ratio between the sector's area, $A_s$ , and the whole circle's area, $A_c$ $\\dfrac{\\theta}{360^\\circ} = \\dfrac{A_s}{A_c}$ $\\dfrac{345^\\circ}{360^\\circ} = \\dfrac{A_s}{9\\pi}$ $\\dfrac{23}{24} = \\dfrac{A_s}{9\\pi}$ $\\dfrac{23}{24} \\times 9\\pi = A_s$ $\\dfrac{69}{8}\\pi = A_s$",
'The area of a rectangular surface is calculated as its length multiplied by its width.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
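Since the model ends with a Normalize() layer, model.similarity defaults to cosine similarity, which for unit-length vectors reduces to a plain dot product. A small sketch verifying that equivalence with NumPy, using random unit vectors in place of real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 384))
# Scale each row to unit length, mimicking the model's Normalize() layer
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Cosine similarity of unit vectors is just a matrix product
similarities = embeddings @ embeddings.T
print(similarities.shape)  # (3, 3)
print(np.allclose(np.diag(similarities), 1.0))  # each vector matches itself
```

The diagonal is always 1.0 (each sentence is maximally similar to itself), and the matrix is symmetric.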
Each training example contains three string columns: sentence_0, sentence_1, and sentence_2.

|  | sentence_0 | sentence_1 | sentence_2 |
|---|---|---|---|
| type | string | string | string |

Sample rows:

| sentence_0 | sentence_1 | sentence_2 |
|---|---|---|
| Back in time, dog sledding was the only method of transportation in frozen parts of the world. You, too, can participate in this time-honored tradition after looking into dog sledding adventures and tours. Alaska Dog Sledding It offers two dog sledding tours in Alaska, the Golsovia Dog Trip and the Iditarod Dog Trip. On both trips, you will learn how to drive your own dog team. The Golsovia Dog Trip lasts six days. The Iditarod Dog Trip is held just before the internationally famous Iditarod race so that participants will be able to attend the Iditarod events. For more information, please see: Alaska Dog Sledding Greenland Expedition Specialists Travel through the shining ice and snow on the land of East Greenland via dog sledding. During this dog sledding vacation, you will be camping during the cold of winter and will take part in caring for the dogs and the camps. They also organize and guide kiting trips, sea kayaking trips and mountaineering vacations. For more informa... | Dog camp | Dog camp Dog camp: Dog camp is a form of vacation for owners accompanied by their dogs with dog-centric activities ranging from casual recreational playtime to serious obedience or sport training. In many dog camps dogs can play and socialize throughout the day while supervised by their owners. Some of |
| My brother is 3 years elder to me. My father was 28 years of age when my sister was born, while my mother was 26 years of age when I was born. If my sister was 4 years of age when my brother was born, then what was the age of my father and mother respectively when my brother was born? A. 32 yrs, 23 yrs. B. 35 yrs, 29 yrs. C. 35 yrs, 33 yrs. D. None of these. | Abby is $3$ years old. Her brother Ben is $4$ years older than she is. How old is Ben? | Ishaan is $3$ times as old as Christopher and is also $14$ years older than Christopher. How old is Ishaan? |
| Blood specimen for neonatal thyroid screening is obtained at: A. Cord blood. B. 24 hours after bih. C. 48 hours after bih. D. 72 hours after bih. | TRH stimulation test | Blood gas test |
Loss: MultipleNegativesRankingLoss with these parameters:
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
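MultipleNegativesRankingLoss treats each anchor's paired positive as the correct match and every other in-batch positive as a negative, then applies cross-entropy over the scaled cosine similarities (scale = 20.0, similarity_fct = cos_sim above). A rough sketch of that objective in PyTorch, simplified from the library's implementation and ignoring the extra hard-negative column:

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    # anchors, positives: (batch, dim); row i of each forms a matching pair
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    scores = a @ p.T * scale            # cos_sim of every anchor vs every positive, scaled
    labels = torch.arange(len(scores))  # the diagonal entries are the true pairs
    return F.cross_entropy(scores, labels)

anchors = torch.randn(16, 384)
positives = torch.randn(16, 384)
loss = multiple_negatives_ranking_loss(anchors, positives)
print(loss.item())  # a positive scalar
```

Because negatives come for free from other examples in the batch, larger batch sizes give more negatives per anchor and typically a stronger training signal.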
Non-default hyperparameters:
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1
- multi_dataset_batch_sampler: round_robin

All hyperparameters:
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

Training logs:

| Epoch | Step | Training Loss |
|---|---|---|
| 0.0690 | 500 | 0.6132 |
| 0.1380 | 1000 | 0.6121 |
| 0.2070 | 1500 | 0.5883 |
| 0.2760 | 2000 | 0.5838 |
| 0.3450 | 2500 | 0.5689 |
| 0.4140 | 3000 | 0.55 |
| 0.4830 | 3500 | 0.5422 |
| 0.5520 | 4000 | 0.5257 |
| 0.6210 | 4500 | 0.5099 |
| 0.6900 | 5000 | 0.5008 |
| 0.7590 | 5500 | 0.5066 |
| 0.8280 | 6000 | 0.4941 |
| 0.8970 | 6500 | 0.4881 |
| 0.9661 | 7000 | 0.4898 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: sentence-transformers/all-MiniLM-L6-v2