# SentenceTransformer based on intfloat/multilingual-e5-large
This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
The full model architecture:

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
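Since the pooling module uses mean pooling and the final module normalizes, the sentence embedding is the L2-normalized mean of the token embeddings. Below is a minimal sketch of what those two modules compute, using the base encoder `intfloat/multilingual-e5-large` as a stand-in (the finetuned weights are what this card actually ships):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large")
encoder = AutoModel.from_pretrained("intfloat/multilingual-e5-large")

batch = tokenizer(["a small test sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# (1) Pooling: mean over non-padding tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit length, so a dot product equals cosine similarity
sentence_embedding = F.normalize(sentence_embedding, p=2, dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 1024])
```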
## Usage

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference: an English sentence, its Sanskrit translation, and an unrelated Sanskrit sentence
sentences = [
    'The majority of these nations are now republics or part of republics.\n',
    # Sanskrit translation of the sentence above
    'एतेषु अधिकांशाः देशाः अधुना गणराज्यानि उत गणराज्यानां भागाः वा सन्ति।\n',
    # Unrelated Sanskrit sentence: "Here is the source file. I compile it using the pdflatex command."
    'अत्र मूलसञ्चिका (source file) विद्यते। pdflatex इत्यादेशमुपयुज्य सङ्कलयामि।',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8049, 0.1296],
#         [0.8049, 1.0000, 0.1642],
#         [0.1296, 0.1642, 1.0000]])
```

Note how the translation pair (rows 0 and 1) scores far higher (0.80) than the unrelated pairs (0.13–0.16).
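Beyond pairwise similarity, the same embeddings support the tasks listed above. Here is a small sketch of semantic search and paraphrase mining with the library's `util` helpers (the corpus and queries are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")

corpus = [
    "Many things are sold to treat acne.",
    "The majority of these nations are now republics.",
]
queries = ["Which products are sold for acne treatment?"]

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embeddings = model.encode(queries, convert_to_tensor=True)

# Semantic search: top-k nearest corpus entries for each query
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], hit["score"])

# Paraphrase mining: scores all sentence pairs within one collection
pairs = util.paraphrase_mining(model, corpus)
for score, i, j in pairs:
    print(score, corpus[i], "<->", corpus[j])
```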
## Evaluation

Translation accuracy on the `eval-en-sa` split, computed with `TranslationEvaluator`:

| Metric | Value |
|---|---|
| src2trg_accuracy | 0.866 |
| trg2src_accuracy | 0.868 |
| mean_accuracy | 0.867 |
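These metrics come from the library's `TranslationEvaluator`: for each English sentence it checks whether the paired Sanskrit sentence is the nearest neighbour among all Sanskrit candidates (`src2trg_accuracy`), and vice versa (`trg2src_accuracy`); `mean_accuracy` averages the two. A minimal sketch of rerunning it (the evaluation sentence lists are not published with the card, so the pair below is illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TranslationEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

english = ["The majority of these nations are now republics or part of republics."]
sanskrit = ["एतेषु अधिकांशाः देशाः अधुना गणराज्यानि उत गणराज्यानां भागाः वा सन्ति।"]

evaluator = TranslationEvaluator(english, sanskrit, name="eval-en-sa")
print(evaluator(model))  # a dict of metrics in recent sentence-transformers versions
```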
## Training Details

The training data consists of English–Sanskrit sentence pairs with two string columns, `sentence_0` and `sentence_1`:

| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |

Samples:

| sentence_0 | sentence_1 |
|---|---|
| For the purpose of this tutorial, we shall list these instructions in slides. | अस्य पाठस्य आनुकूल्याय स्लैड् द्वारा आदेशान् वदामः । |
| Gandharva prajapati, Vishwakarma and mana swaroop. Please protect Gandharva Brahmins and Kshatriyas. Riku and Sama have an apsara named Ashti. Please protect us. This sacrifice is an offering for them. Swaha for them. (43) | प्र॒जाप॑तिर्वि॒श्वक॑र्मा॒ मनो॑ गन्ध॒र्वस्तस्य॑ऽऋ॒क्सा॒मान्य॑प्स॒रस॒ऽएष्ट॑यो॒ नाम॑। स न॑ऽइ॒दं ब्रह्म॑ क्ष॒त्रं पा॑तु॒ तस्मै॒ स्वाहा॒ वाट् ताभ्यः॒ स्वाहा॑ ॥ (४३) |
| Many things are sold to treat acne, the most popular being benzoyl peroxide. | आक्ने-चिकित्सार्थं नाइकानि वस्तूनि विक्रीयन्ते, तेषु अतिजनप्रियं बेन्ज़ोय्ल् पराक्सैड्। |
The model was trained with `MultipleNegativesRankingLoss` using these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
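A minimal training sketch under these settings; the toy dataset below just reuses a pair from the samples table, and with this loss every in-batch example other than a sentence's own pair acts as a negative:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses, util

model = SentenceTransformer("intfloat/multilingual-e5-large")

train_dataset = Dataset.from_dict({
    "sentence_0": ["For the purpose of this tutorial, we shall list these instructions in slides."],
    "sentence_1": ["अस्य पाठस्य आनुकूल्याय स्लैड् द्वारा आदेशान् वदामः ।"],
})

# scale and similarity function as listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```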
The following hyperparameters were non-default (a sketch mapping these onto `SentenceTransformerTrainingArguments` follows the training-log table below):

- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `num_train_epochs`: 15
- `multi_dataset_batch_sampler`: round_robin

All hyperparameters:

```
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: True
per_device_train_batch_size: 4
per_device_eval_batch_size: 4
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 5e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1
num_train_epochs: 15
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.0
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: False
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: None
hub_always_push: False
hub_revision: None
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
include_for_metrics: []
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters: 
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
liger_kernel_config: None
eval_use_gather_object: False
average_tokens_across_devices: False
prompts: None
batch_sampler: batch_sampler
multi_dataset_batch_sampler: round_robin
router_mapping: {}
learning_rate_mapping: {}
```

Training logs:

| Epoch | Step | Training Loss | eval-en-sa_mean_accuracy |
|---|---|---|---|
| 0.0078 | 500 | 0.2715 | - |
| 0.0155 | 1000 | 0.0402 | - |
| 0.0233 | 1500 | 0.0323 | - |
| 0.0310 | 2000 | 0.0305 | - |
| 0.0388 | 2500 | 0.0169 | - |
| 0.0465 | 3000 | 0.0122 | - |
| 0.0543 | 3500 | 0.011 | - |
| 0.0620 | 4000 | 0.0134 | - |
| 0.0698 | 4500 | 0.0081 | - |
| 0.0776 | 5000 | 0.0177 | - |
| 0.0853 | 5500 | 0.0195 | - |
| 0.0931 | 6000 | 0.014 | - |
| 0.1008 | 6500 | 0.0226 | - |
| 0.1086 | 7000 | 0.0122 | - |
| 0.1163 | 7500 | 0.0156 | - |
| 0.1241 | 8000 | 0.0192 | - |
| 0.1318 | 8500 | 0.023 | - |
| 0.1396 | 9000 | 0.0153 | - |
| 0.1474 | 9500 | 0.0275 | - |
| 0.1551 | 10000 | 0.0272 | - |
| 0.1629 | 10500 | 0.0222 | - |
| 0.1706 | 11000 | 0.0134 | - |
| 0.1784 | 11500 | 0.0216 | - |
| 0.1861 | 12000 | 0.0152 | - |
| 0.1939 | 12500 | 0.0104 | - |
| 0.2016 | 13000 | 0.0178 | - |
| 0.2094 | 13500 | 0.0209 | - |
| 0.2171 | 14000 | 0.0211 | - |
| 0.2249 | 14500 | 0.0198 | - |
| 0.2327 | 15000 | 0.0212 | - |
| 0.2404 | 15500 | 0.0177 | - |
| 0.2482 | 16000 | 0.0221 | - |
| 0.2559 | 16500 | 0.0206 | - |
| 0.2637 | 17000 | 0.0181 | - |
| 0.2714 | 17500 | 0.0165 | - |
| 0.2792 | 18000 | 0.0145 | - |
| 0.2869 | 18500 | 0.0139 | - |
| 0.2947 | 19000 | 0.0198 | - |
| 0.3025 | 19500 | 0.0139 | - |
| 0.3102 | 20000 | 0.0177 | - |
| 0.3180 | 20500 | 0.0104 | - |
| 0.3257 | 21000 | 0.0149 | - |
| 0.3335 | 21500 | 0.0144 | - |
| 0.3412 | 22000 | 0.0168 | - |
| 0.3490 | 22500 | 0.0156 | - |
| 0.3567 | 23000 | 0.0132 | - |
| 0.3645 | 23500 | 0.0152 | - |
| 0.3723 | 24000 | 0.0147 | - |
| 0.3800 | 24500 | 0.0142 | - |
| 0.3878 | 25000 | 0.018 | - |
| 0.3955 | 25500 | 0.0246 | - |
| 0.4033 | 26000 | 0.0105 | - |
| 0.4110 | 26500 | 0.0097 | - |
| 0.4188 | 27000 | 0.0145 | - |
| 0.4265 | 27500 | 0.0136 | - |
| 0.4343 | 28000 | 0.0182 | - |
| 0.4421 | 28500 | 0.016 | - |
| 0.4498 | 29000 | 0.0088 | - |
| 0.4576 | 29500 | 0.0106 | - |
| 0.4653 | 30000 | 0.02 | - |
| 0.4731 | 30500 | 0.0153 | - |
| 0.4808 | 31000 | 0.0118 | - |
| 0.4886 | 31500 | 0.0141 | - |
| 0.4963 | 32000 | 0.0194 | - |
| 0.5041 | 32500 | 0.0149 | - |
| 0.5119 | 33000 | 0.0099 | - |
| 0.5196 | 33500 | 0.0212 | - |
| 0.5274 | 34000 | 0.0112 | - |
| 0.5351 | 34500 | 0.0175 | - |
| 0.5429 | 35000 | 0.0149 | - |
| 0.5506 | 35500 | 0.0142 | - |
| 0.5584 | 36000 | 0.0174 | - |
| 0.5661 | 36500 | 0.0146 | - |
| 0.5739 | 37000 | 0.0186 | - |
| 0.5816 | 37500 | 0.0167 | - |
| 0.5894 | 38000 | 0.0356 | - |
| 0.5972 | 38500 | 0.0195 | - |
| 0.6049 | 39000 | 0.0165 | - |
| 0.6127 | 39500 | 0.0202 | - |
| 0.6204 | 40000 | 0.0142 | - |
| 0.6282 | 40500 | 0.0104 | - |
| 0.6359 | 41000 | 0.0104 | - |
| 0.6437 | 41500 | 0.0155 | - |
| 0.6514 | 42000 | 0.0056 | - |
| 0.6592 | 42500 | 0.0102 | - |
| 0.6670 | 43000 | 0.0096 | - |
| 0.6747 | 43500 | 0.0219 | - |
| 0.6825 | 44000 | 0.0106 | - |
| 0.6902 | 44500 | 0.0129 | - |
| 0.6980 | 45000 | 0.0152 | - |
| 0.7057 | 45500 | 0.0158 | - |
| 0.7135 | 46000 | 0.0082 | - |
| 0.7212 | 46500 | 0.0159 | - |
| 0.7290 | 47000 | 0.0184 | - |
| 0.7368 | 47500 | 0.0101 | - |
| 0.7445 | 48000 | 0.0101 | - |
| 0.7523 | 48500 | 0.0115 | - |
| 0.7600 | 49000 | 0.0111 | - |
| 0.7678 | 49500 | 0.0116 | - |
| 0.7755 | 50000 | 0.0085 | 0.867 |
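As referenced above, here is a sketch mapping the non-default hyperparameters onto `SentenceTransformerTrainingArguments` (the output directory is hypothetical):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/en-sa-model",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=15,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```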
## Citation

Sentence Transformers:

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

MultipleNegativesRankingLoss:

```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```