Paper: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (arXiv:1908.10084)
This is a sentence-transformers model finetuned from google-bert/bert-base-uncased. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Full model architecture:

SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
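For intuition, the Pooling module above turns the Transformer's per-token outputs into one fixed-size vector per input by mean pooling (pooling_mode_mean_tokens is True). A minimal PyTorch sketch of that step, written here for illustration rather than taken from the library's code:

import torch

# Illustrative mean pooling: average the token embeddings, using the
# attention mask so padding positions do not contribute.
def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    mask = attention_mask.unsqueeze(-1).float()    # [batch, seq_len, 1]
    summed = (token_embeddings * mask).sum(dim=1)  # [batch, hidden]
    counts = mask.sum(dim=1).clamp(min=1e-9)       # [batch, 1], avoid divide-by-zero
    return summed / counts                         # one 768-dim vector per input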
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gavinqiangli/my-awesome-bi-encoder")
# Run inference
sentences = [
"How can the drive from Edmonton to Auckland be described, and how do these cities' attractions compare to those in Vancouver?",
'How can the drive from Edmonton to Auckland be described, and how does the history of these cities compare and contrast to the history of Vancouver?',
'Which optional subjects can I choose for the IAS exam?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
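The same embeddings support the other tasks listed above. For example, a small paraphrase-mining sketch using util.paraphrase_mining (the toy corpus below is chosen for illustration):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("gavinqiangli/my-awesome-bi-encoder")
corpus = [
    "How do I become a good lawyer?",
    "How can someone become a successful lawyer?",
    "Why is China going to the Moon?",
]
# Compares all sentence pairs and returns [score, i, j] triples,
# sorted by decreasing cosine similarity
pairs = util.paraphrase_mining(model, corpus)
score, i, j = pairs[0]
print(f"{score:.3f}: {corpus[i]} <-> {corpus[j]}")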
Evaluation with BinaryClassificationEvaluator:

| Metric | Value |
|---|---|
| cosine_accuracy | 0.7644 |
| cosine_accuracy_threshold | 0.8147 |
| cosine_f1 | 0.6959 |
| cosine_f1_threshold | 0.7402 |
| cosine_precision | 0.5946 |
| cosine_recall | 0.839 |
| cosine_ap | 0.7113 |
| dot_accuracy | 0.74 |
| dot_accuracy_threshold | 153.501 |
| dot_f1 | 0.6711 |
| dot_f1_threshold | 133.2327 |
| dot_precision | 0.5683 |
| dot_recall | 0.8192 |
| dot_ap | 0.6542 |
| manhattan_accuracy | 0.7665 |
| manhattan_accuracy_threshold | 176.4289 |
| manhattan_f1 | 0.6973 |
| manhattan_f1_threshold | 218.9676 |
| manhattan_precision | 0.59 |
| manhattan_recall | 0.8522 |
| manhattan_ap | 0.7109 |
| euclidean_accuracy | 0.7665 |
| euclidean_accuracy_threshold | 8.0922 |
| euclidean_f1 | 0.697 |
| euclidean_f1_threshold | 9.7942 |
| euclidean_precision | 0.5946 |
| euclidean_recall | 0.8421 |
| euclidean_ap | 0.7109 |
| max_accuracy | 0.7665 |
| max_accuracy_threshold | 176.4289 |
| max_f1 | 0.6973 |
| max_f1_threshold | 218.9676 |
| max_precision | 0.5946 |
| max_recall | 0.8522 |
| max_ap | 0.7113 |
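These metrics come from the BinaryClassificationEvaluator. A minimal sketch of running such an evaluation yourself (the pairs and labels below are toy placeholders, not the actual dev set):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("gavinqiangli/my-awesome-bi-encoder")
# Each pair gets a binary label: 1 = duplicate/similar, 0 = not
evaluator = BinaryClassificationEvaluator(
    sentences1=["Why is China going to the Moon?", "How do I become a good lawyer?"],
    sentences2=["What does China want with the moon?", "Which optional subjects can I choose for the IAS exam?"],
    labels=[1, 0],
)
results = evaluator(model)  # evaluation metrics such as cosine_accuracy and cosine_ap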
Training data columns: sentence_0, sentence_1, and label.

| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | int |

Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| Are Jewish people the most intelligent in the universe? | Why are Jewish people so intelligent? | 1 |
| How do I become a good lawyer? What are the qualities of a good lawyer? | How can someone become a successful lawyer? | 1 |
| Why is China going to the Moon? | What does China want with the moon? | 1 |
Loss: MultipleNegativesRankingLoss with these parameters:

{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
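A minimal sketch of constructing this loss with those parameters (cos_sim is the default similarity function; the base model name matches the card above):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("google-bert/bert-base-uncased")
# In-batch negatives: for each (anchor, positive) pair, every other
# positive in the batch serves as a negative; scale multiplies the
# similarity scores before the cross-entropy loss.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)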
Training hyperparameters (non-default):

- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1
- multi_dataset_batch_sampler: round_robin

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

Training log:

| Epoch | Step | Training Loss | max_ap |
|---|---|---|---|
| 0.0772 | 500 | 0.0796 | - |
| 0.1543 | 1000 | 0.0205 | 0.6878 |
| 0.2315 | 1500 | 0.0197 | - |
| 0.3087 | 2000 | 0.0201 | 0.6864 |
| 0.3859 | 2500 | 0.0185 | - |
| 0.4630 | 3000 | 0.0161 | 0.6933 |
| 0.5402 | 3500 | 0.0163 | - |
| 0.6174 | 4000 | 0.0172 | 0.7089 |
| 0.6946 | 4500 | 0.0172 | - |
| 0.7717 | 5000 | 0.0143 | 0.7072 |
| 0.8489 | 5500 | 0.0129 | - |
| 0.9261 | 6000 | 0.0124 | 0.7112 |
| 1.0 | 6479 | - | 0.7113 |
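For reference, a training-loop sketch consistent with the non-default hyperparameters above, using the sentence-transformers v3 trainer API. The card does not name the actual training set, so the pairs and output path below are placeholders, and the evaluation wiring is omitted for brevity:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("google-bert/bert-base-uncased")
# Placeholder pairs in the (sentence_0, sentence_1) format shown above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Why is China going to the Moon?", "How do I become a good lawyer?"],
    "sentence_1": ["What does China want with the moon?", "How can someone become a successful lawyer?"],
})
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
args = SentenceTransformerTrainingArguments(
    output_dir="my-awesome-bi-encoder",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=16,
)
trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()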
Citation (Sentence-BERT):

@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss:

@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: google-bert/bert-base-uncased