Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (arXiv:1908.10084)
This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
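The `Pooling` and `Normalize` modules above amount to taking the `[CLS]` (first) token vector of each sequence and L2-normalizing it. A minimal NumPy sketch of those two steps (names and the toy batch are illustrative, not the library implementation):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling: take the first token vector per sequence, then L2-normalize."""
    cls = token_embeddings[:, 0, :]                       # (batch, dim)
    norms = np.linalg.norm(cls, axis=1, keepdims=True)
    return cls / np.clip(norms, 1e-12, None)

# Toy batch: 2 sequences, 4 tokens each, 384-dimensional hidden states.
tokens = np.random.RandomState(0).randn(2, 4, 384)
sentence_embeddings = cls_pool_and_normalize(tokens)
print(sentence_embeddings.shape)  # (2, 384)
```

Because of the final normalization, every output vector has unit length, which is what lets cosine similarity later reduce to a plain dot product.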
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("MistyDragon/bge-small-finetuned")
# Run inference
sentences = [
'search_document: 386\u2003 ◾\u2003 Production and Operations Management Systems\n10.8 Location Decisions Using the Transportation \nModel\nTransportation costs are a primary concern for a new start-up company or division. \nThis also applies to an existing company that intends to relocate. Finally, it should \nbe common practice to reevaluate the current location of an ongoing business so \nthat the impact of changing conditions and new opportunities are not overlooked. \nWhen shipping costs are critical for the location decision, the transportation model \n(TM) can determine minimum cost or maximum profit solutions that specify opti-\nmal shipping patterns between many locations.\nTransportation costs include the combined costs of moving raw materials to \nthe plant and of transporting finished goods from the plant to one or more ware -\nhouses. It is easier to explain the TM with the following numerical example than \nwith abstract math equations. A doll manufacturer has decided to build a fac -\ntory in the center of the United States. More specifically, Missouri and Ohio are \nidentified as the potential states. Several sites in the two regions have been identi -\nfied. Two cities have been chosen as candidates. These are St Louis, Missouri, and \nColumbus, Ohio. Real-estate costs are about equal in both. The problem is to \nselect one of the two cities. The decision will be based on the shipping (transporta -\ntion) costs.\n10.8.1 Shipping (Transportation or Distribution) Costs\nThe average cost of shipping (also known as the cost of distribution or cost of trans-\nportation) the components that the company uses to the Columbus, Ohio, location \nis $6 per production unit. Shipping costs average only $3 per unit to St Louis, \nMissouri. In TM terminology, shippers (suppliers, in this case) are called sources or \norigins. 
Those receiving shipments (producers, in this case) are called destinations.\nThe average cost of shipping from the Columbus, Ohio, location to the \n market—distributor’s warehouse is $2 per unit. The average cost of shipping from \nSt Louis, Missouri, to the market—distributor’s warehouse is $4 per unit. The same \nterminology applies. The shipper is the producer (source or origin) and the receivers \nare the distributors or customers (destinations). The configuration of origins and \ndestinations are shown in Figure 10.1.\nTotal transportation costs to and from the Columbus, Ohio, plant are \n$6 + $2 = $8 per unit; for St Louis, Missouri, they are $3 + $4 = $7. Other things \nbeing equal, the company should choose St Louis, Missouri. However, the real \nworld is not as simple as this.\nThe problem becomes more complex when there are a number of origins com -\npeting for shipments to a number of destinations. We will illustrate the com -\nplexity of the problem and its solution using the example of Rukna Auto Parts \nManufacturing Company.',
'search_query: In the context of the Transportation Model (TM), what are the primary considerations for a company when deciding on a new location for its operations?',
'search_query: What is the primary objective of loading in the production scheduling process?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7613, 0.4329],
# [0.7613, 1.0000, 0.4239],
# [0.4329, 0.4239, 1.0000]])
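Since the embeddings are L2-normalized, the cosine similarities above are just dot products, and semantic search reduces to ranking documents by score against a query embedding. A sketch with toy 3-dimensional unit vectors (the real model emits 384-dimensional ones; the helper names are hypothetical):

```python
import numpy as np

def rank_documents(query_emb: np.ndarray, doc_embs: np.ndarray) -> list:
    """Document indices sorted by cosine similarity, descending.
    Embeddings are assumed L2-normalized, so a dot product is cosine."""
    scores = doc_embs @ query_emb
    return np.argsort(-scores).tolist()

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

query = unit([1.0, 0.0, 0.0])
docs = np.stack([unit([1.0, 0.1, 0.0]),   # near-duplicate of the query
                 unit([0.0, 1.0, 0.0]),   # unrelated
                 unit([1.0, 1.0, 0.0])])  # related
print(rank_documents(query, docs))  # [0, 2, 1]
```

In practice you would encode queries with the `search_query:` prefix and documents with the `search_document:` prefix, as in the sentences above, before ranking.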
Evaluated with `InformationRetrievalEvaluator` on the `dim_384` dataset with these parameters:
{
  "truncate_dim": 384
}
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.6894 |
| cosine_accuracy@3 | 0.803 |
| cosine_accuracy@5 | 0.8485 |
| cosine_accuracy@10 | 0.8864 |
| cosine_precision@1 | 0.6894 |
| cosine_precision@3 | 0.2677 |
| cosine_precision@5 | 0.1697 |
| cosine_precision@10 | 0.0886 |
| cosine_recall@1 | 0.6894 |
| cosine_recall@3 | 0.803 |
| cosine_recall@5 | 0.8485 |
| cosine_recall@10 | 0.8864 |
| cosine_ndcg@10 | 0.7854 |
| cosine_mrr@10 | 0.7531 |
| cosine_map@100 | 0.7569 |
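The nDCG@10 figure above discounts each relevant hit by the logarithm of its rank. With a single relevant document per query, as in this evaluation, the metric simplifies considerably; a small sketch of the standard formula (not the evaluator's internals):

```python
import math

def ndcg_at_k(relevances, k=10):
    """nDCG@k for a ranked list of binary relevance judgments."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# One relevant document ranked 2nd: DCG = 1/log2(3), IDCG = 1/log2(2) = 1.
print(ndcg_at_k([0, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # ≈ 0.6309
```

A relevant document at rank 1 scores 1.0, so the reported 0.7854 reflects relevant documents sitting near, but not always at, the top of the ranking.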
Training dataset: `positive` and `anchor` pairs, both strings. Examples (truncated):

| positive | anchor |
|---|---|
| search_document: 9192 0.9207 0.9222 0.9236 0.9215 0.9265 0.9279 0.9292 0.9306 0.9319 | search_query: What is the value of the function at x = 1.5? |
| search_document: 72 • Quality Management: Theory and Applicatio n | search_query: What is the primary difference between tertiary and higher education as described in the document? |
| search_document: 273 | search_query: In the context of the document, which company developed a program to improve service quality that is used as a benchmark for continuous process improvement by all KFC stores? |
Loss: `MultipleNegativesRankingLoss` with these parameters:
{
  "scale": 20.0,
  "similarity_fct": "cos_sim"
}
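`MultipleNegativesRankingLoss` treats the other positives in a batch as negatives for each anchor and applies cross-entropy over the scaled cosine similarities. A NumPy sketch of the objective (illustrative only, not the library implementation, and assuming L2-normalized embeddings):

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Anchor i's positive is row i; every other in-batch positive is a
    negative.  Dot products of normalized rows are cosine similarities."""
    scores = scale * (anchors @ positives.T)          # (batch, batch)
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))        # cross-entropy, diagonal labels

# Perfectly matched pairs give a near-zero loss; mismatched pairs a large one.
matched = mnr_loss(np.eye(2), np.eye(2))
mismatched = mnr_loss(np.eye(2), np.eye(2)[::-1])
print(matched < mismatched)  # True
```

The `scale` of 20.0 sharpens the softmax so that small cosine gaps between the true pair and in-batch negatives still produce a meaningful gradient.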
Non-default training hyperparameters:

- eval_strategy: epoch
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 8
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: False
- load_best_model_at_end: True
- optim: adamw_torch_fused
- push_to_hub: True
- hub_model_id: MistyDragon/bge-small-finetuned
- push_to_hub_model_id: bge-small-finetuned
- batch_sampler: no_duplicates

All remaining arguments were left at their `transformers` `TrainingArguments` defaults.

| Epoch | Step | Training Loss | dim_384_cosine_ndcg@10 |
|---|---|---|---|
| -1 | -1 | - | 0.7432 |
| 1.0 | 9 | - | 0.7747 |
| 1.1212 | 10 | 0.5749 | - |
| 2.0 | 18 | - | 0.7759 |
| 2.2424 | 20 | 0.3087 | - |
| 3.0 | 27 | - | 0.7814 |
| 3.3636 | 30 | 0.2328 | - |
| 4.0 | 36 | - | 0.7854 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: BAAI/bge-small-en-v1.5