Paper: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (arXiv:1908.10084)
This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. The purpose of this finetune is to pull semantically related snack/food names closer together in the embedding space (e.g., "chips" -> "potato chips").
Full model architecture:
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False, 'architecture': 'MPNetModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
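The Pooling module mean-pools token embeddings (pooling_mode_mean_tokens) and the Normalize module L2-normalizes the result, so dot products between embeddings equal cosine similarities. As a minimal sketch of the equivalent computation with the plain transformers API (assuming the repo's Transformer weights load via AutoModel, as is typical for sentence-transformers repos):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weike1000/Snack_Embed")
model = AutoModel.from_pretrained("Weike1000/Snack_Embed")

encoded = tokenizer(["cheddar cheese", "Cucumber"], padding=True,
                    truncation=True, max_length=384, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average token embeddings over non-padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize: unit-length vectors, matching the Normalize() module.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 768])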
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Weike1000/Snack_Embed")
# Run inference
sentences = [
'cheddar cheese',
'Cheddar Cheese Block',
'Cucumber',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9452, 0.1340],
# [0.9452, 1.0000, 0.1356],
# [0.1340, 0.1356, 1.0000]])
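Since the model is tuned to match shorthand snack names to product names, a natural downstream use is semantic search over a product catalog. A short sketch; the catalog below is made up for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Weike1000/Snack_Embed")

# Hypothetical product catalog (illustrative only).
catalog = ["Potato Chips", "Tortilla Chips", "Gummy Bears", "Cheddar Cheese Block"]
catalog_embeddings = model.encode(catalog, convert_to_tensor=True)

# Retrieve the closest catalog entries for a shorthand query like "chips".
query_embedding = model.encode("chips", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, catalog_embeddings, top_k=2)[0]
for hit in hits:
    print(catalog[hit["corpus_id"]], f"{hit['score']:.4f}")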
The training data consists of sentence pairs with columns sentence_0 and sentence_1:

| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |

Sample pairs:

| sentence_0 | sentence_1 |
|---|---|
| fudge stripes | Keebler Fudge Stripes Cookies |
| gummy bears bag | Gummy Bears |
| kind bar caramel | Kind Bar Caramel Almond & Sea Salt |
Loss: MultipleNegativesRankingLoss with these parameters:
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
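MultipleNegativesRankingLoss treats each (sentence_0, sentence_1) pair as a positive and uses the other sentence_1 values in the batch as in-batch negatives. As a minimal sketch (not the author's exact training script), a comparable finetuning run with the loss parameters above could look like this; the tiny dataset reuses the sample pairs from this card:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Illustrative (shorthand, product name) pairs taken from the samples above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["fudge stripes", "gummy bears bag", "kind bar caramel"],
    "sentence_1": ["Keebler Fudge Stripes Cookies", "Gummy Bears",
                   "Kind Bar Caramel Almond & Sea Salt"],
})

# scale=20.0 with the default cosine similarity matches the parameters above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()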
Training hyperparameters

Non-default hyperparameters:
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1000
- multi_dataset_batch_sampler: round_robin

All hyperparameters:
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1000
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
- router_mapping: {}
- learning_rate_mapping: {}

Training logs:

| Epoch | Step | Training Loss |
|---|---|---|
| 6.25 | 500 | 0.0756 |
| 12.5 | 1000 | 0.0396 |
| 18.75 | 1500 | 0.033 |
| 25.0 | 2000 | 0.0283 |
| 31.25 | 2500 | 0.0257 |
| 37.5 | 3000 | 0.0249 |
| 43.75 | 3500 | 0.0248 |
| 50.0 | 4000 | 0.019 |
| 56.25 | 4500 | 0.0242 |
| 62.5 | 5000 | 0.0203 |
| 68.75 | 5500 | 0.0205 |
| 75.0 | 6000 | 0.0225 |
| 81.25 | 6500 | 0.0183 |
| 87.5 | 7000 | 0.0227 |
| 93.75 | 7500 | 0.0224 |
| 100.0 | 8000 | 0.022 |
| 106.25 | 8500 | 0.0244 |
| 112.5 | 9000 | 0.0231 |
| 118.75 | 9500 | 0.021 |
| 125.0 | 10000 | 0.0215 |
| 131.25 | 10500 | 0.0166 |
| 137.5 | 11000 | 0.0186 |
| 143.75 | 11500 | 0.0211 |
| 150.0 | 12000 | 0.0208 |
| 156.25 | 12500 | 0.0214 |
| 162.5 | 13000 | 0.0207 |
| 168.75 | 13500 | 0.0216 |
| 175.0 | 14000 | 0.0214 |
| 181.25 | 14500 | 0.0209 |
| 187.5 | 15000 | 0.0197 |
| 193.75 | 15500 | 0.022 |
| 200.0 | 16000 | 0.0183 |
| 206.25 | 16500 | 0.0189 |
| 212.5 | 17000 | 0.0188 |
| 218.75 | 17500 | 0.0163 |
| 225.0 | 18000 | 0.0209 |
| 231.25 | 18500 | 0.0185 |
| 237.5 | 19000 | 0.0211 |
| 243.75 | 19500 | 0.02 |
| 250.0 | 20000 | 0.0206 |
| 256.25 | 20500 | 0.0222 |
| 262.5 | 21000 | 0.0185 |
| 268.75 | 21500 | 0.0205 |
| 275.0 | 22000 | 0.0165 |
| 281.25 | 22500 | 0.0185 |
| 287.5 | 23000 | 0.0164 |
| 293.75 | 23500 | 0.0191 |
| 300.0 | 24000 | 0.0197 |
| 306.25 | 24500 | 0.0195 |
| 312.5 | 25000 | 0.0185 |
| 318.75 | 25500 | 0.017 |
| 325.0 | 26000 | 0.0184 |
| 331.25 | 26500 | 0.0184 |
| 337.5 | 27000 | 0.0211 |
| 343.75 | 27500 | 0.0182 |
| 350.0 | 28000 | 0.0189 |
| 356.25 | 28500 | 0.0172 |
| 362.5 | 29000 | 0.0195 |
| 368.75 | 29500 | 0.0221 |
| 375.0 | 30000 | 0.0197 |
| 381.25 | 30500 | 0.0228 |
| 387.5 | 31000 | 0.0173 |
| 393.75 | 31500 | 0.0191 |
| 400.0 | 32000 | 0.0203 |
| 406.25 | 32500 | 0.0202 |
| 412.5 | 33000 | 0.0186 |
| 418.75 | 33500 | 0.0178 |
| 425.0 | 34000 | 0.018 |
| 431.25 | 34500 | 0.0192 |
| 437.5 | 35000 | 0.0186 |
| 443.75 | 35500 | 0.0211 |
| 450.0 | 36000 | 0.0209 |
| 456.25 | 36500 | 0.0216 |
| 462.5 | 37000 | 0.0201 |
| 468.75 | 37500 | 0.0227 |
| 475.0 | 38000 | 0.02 |
| 481.25 | 38500 | 0.018 |
| 487.5 | 39000 | 0.0218 |
| 493.75 | 39500 | 0.0237 |
| 500.0 | 40000 | 0.0208 |
| 506.25 | 40500 | 0.0185 |
| 512.5 | 41000 | 0.0188 |
| 518.75 | 41500 | 0.0188 |
| 525.0 | 42000 | 0.0168 |
| 531.25 | 42500 | 0.017 |
| 537.5 | 43000 | 0.0165 |
| 543.75 | 43500 | 0.0197 |
| 550.0 | 44000 | 0.0159 |
| 556.25 | 44500 | 0.0224 |
| 562.5 | 45000 | 0.0179 |
| 568.75 | 45500 | 0.0188 |
| 575.0 | 46000 | 0.0203 |
| 581.25 | 46500 | 0.018 |
| 587.5 | 47000 | 0.0195 |
| 593.75 | 47500 | 0.0194 |
| 600.0 | 48000 | 0.0205 |
| 606.25 | 48500 | 0.0185 |
| 612.5 | 49000 | 0.0208 |
| 618.75 | 49500 | 0.0205 |
| 625.0 | 50000 | 0.0201 |
| 631.25 | 50500 | 0.0175 |
| 637.5 | 51000 | 0.0171 |
| 643.75 | 51500 | 0.0184 |
| 650.0 | 52000 | 0.0228 |
| 656.25 | 52500 | 0.0203 |
| 662.5 | 53000 | 0.0222 |
| 668.75 | 53500 | 0.0188 |
| 675.0 | 54000 | 0.0235 |
| 681.25 | 54500 | 0.0182 |
| 687.5 | 55000 | 0.0215 |
| 693.75 | 55500 | 0.018 |
| 700.0 | 56000 | 0.0227 |
| 706.25 | 56500 | 0.0185 |
| 712.5 | 57000 | 0.0179 |
| 718.75 | 57500 | 0.0176 |
| 725.0 | 58000 | 0.0233 |
| 731.25 | 58500 | 0.0213 |
| 737.5 | 59000 | 0.0208 |
| 743.75 | 59500 | 0.015 |
| 750.0 | 60000 | 0.0199 |
| 756.25 | 60500 | 0.0197 |
| 762.5 | 61000 | 0.0199 |
| 768.75 | 61500 | 0.0209 |
| 775.0 | 62000 | 0.0185 |
| 781.25 | 62500 | 0.0183 |
| 787.5 | 63000 | 0.0169 |
| 793.75 | 63500 | 0.0176 |
| 800.0 | 64000 | 0.0206 |
| 806.25 | 64500 | 0.0186 |
| 812.5 | 65000 | 0.0181 |
| 818.75 | 65500 | 0.0179 |
| 825.0 | 66000 | 0.0184 |
| 831.25 | 66500 | 0.0157 |
| 837.5 | 67000 | 0.0181 |
| 843.75 | 67500 | 0.0174 |
| 850.0 | 68000 | 0.0185 |
| 856.25 | 68500 | 0.0213 |
| 862.5 | 69000 | 0.0181 |
| 868.75 | 69500 | 0.02 |
| 875.0 | 70000 | 0.0141 |
| 881.25 | 70500 | 0.0168 |
| 887.5 | 71000 | 0.0218 |
| 893.75 | 71500 | 0.0188 |
| 900.0 | 72000 | 0.0139 |
| 906.25 | 72500 | 0.0188 |
| 912.5 | 73000 | 0.022 |
| 918.75 | 73500 | 0.0154 |
| 925.0 | 74000 | 0.0165 |
| 931.25 | 74500 | 0.0186 |
| 937.5 | 75000 | 0.0191 |
| 943.75 | 75500 | 0.0188 |
| 950.0 | 76000 | 0.0176 |
| 956.25 | 76500 | 0.0218 |
| 962.5 | 77000 | 0.0185 |
| 968.75 | 77500 | 0.0193 |
| 975.0 | 78000 | 0.0218 |
| 981.25 | 78500 | 0.0161 |
| 987.5 | 79000 | 0.0216 |
| 993.75 | 79500 | 0.0225 |
| 1000.0 | 80000 | 0.0194 |
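These settings map directly onto SentenceTransformerTrainingArguments. A hedged sketch of the non-default values (the output directory name is hypothetical; all other values listed above are library defaults):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="snack_embed",  # hypothetical output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1000,
    multi_dataset_batch_sampler="round_robin",  # accepts the enum or its string name
)

Passing these args to the SentenceTransformerTrainer sketch shown earlier would reproduce the configuration.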
Citation (Sentence Transformers):
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss:
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: sentence-transformers/all-mpnet-base-v2