SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Full model architecture:

```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
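The three modules above can also be assembled by hand with the `sentence_transformers.models` API. A minimal sketch; the module arguments mirror the configuration printed above, everything else is illustrative:

```python
from sentence_transformers import SentenceTransformer, models

# Transformer backbone, truncating inputs at 256 word pieces as configured above
word_embedding = models.Transformer(
    "sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256
)
# Mean pooling over token embeddings -> one 384-dimensional vector per input
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(), pooling_mode="mean"
)
# L2-normalization, so dot product and cosine similarity coincide
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])
```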
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")
# Run inference
sentences = [
'How many active ASes are reported as of the CIDR report mentioned in the document?',
'<title>Pervasive Internet-Wide Low-Latency Authentication</title>\n<section>C. AS as Opportunistically Trusted Entity</section>\n<content>\nEach entity in the Internet is part of at least one AS, which is under the control of a single administrative entity. This facilitates providing a common service that authenticates endpoints (e.g., using a challenge-response protocol or preinstalled keys and certificates) and issues certificates. Another advantage is the typically close relationship between an endpoint and its AS, which allows for a stronger leverage in case of misbehavior. Since it is infeasible for an endpoint to authenticate each AS by itself (there are ∼71 000 active ASes according to the CIDR report [4] ), RPKI is used as a trust anchor to authenticate ASes. RPKI resource issuers assign an AS a set of IP address prefixes that this AS is allowed to originate. An AS then issues short-lived certificates for its authorized IP address ranges.\n</content>',
'<title>Unknown Title</title>\n<section>5.1 Paths emission per unit of traffic</section>\n<content>\nThe reason is that the number of BGP paths is less than 5 for most AS pairs. This figure also suggests that the 5-greenest paths average emission differs from the greenest path emission and the n-greenest paths average emission for both beaconing algorithms. However, for every percentile, this difference in SCI-GIB is about 3 times less than the one in SCI-BCE. This means that the 5-greenest paths average emission in SCI-GIB is much closer to the greenest path emission than SCI-BCE. Also, for every percentile, the difference between the 5-greenest paths average emissions of the two different beaconing algorithms is 2 times more than the difference between their greenest path emissions. From both of these observations, we conclude that SCI-GIB is better at finding the greenest set of paths\n</content>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
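Since the final Normalize module L2-normalizes every embedding, the vectors can be fed straight into a nearest-neighbor search. A minimal semantic-search sketch; the corpus and query strings below are illustrative, not taken from this model's training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")

# Illustrative passages; in practice, use your own document collection
corpus = [
    "SCION is a path-aware inter-domain network architecture.",
    "RPKI is used as a trust anchor to authenticate ASes.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("How are ASes authenticated?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # ranked list of {'corpus_id': ..., 'score': ...} dicts
```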
Evaluation: Information Retrieval on dataset val-ir-eval, measured with InformationRetrievalEvaluator.

| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.6294 |
| cosine_accuracy@3 | 0.8216 |
| cosine_accuracy@5 | 0.8764 |
| cosine_accuracy@10 | 0.9309 |
| cosine_precision@1 | 0.6294 |
| cosine_precision@3 | 0.2739 |
| cosine_precision@5 | 0.1755 |
| cosine_precision@10 | 0.0933 |
| cosine_recall@1 | 0.6292 |
| cosine_recall@3 | 0.821 |
| cosine_recall@5 | 0.8759 |
| cosine_recall@10 | 0.9306 |
| cosine_ndcg@10 | 0.7828 |
| cosine_mrr@10 | 0.7351 |
| cosine_map@100 | 0.7379 |
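These metrics come from sentence-transformers' InformationRetrievalEvaluator on the val-ir-eval split. That split is not reproduced in this card, so the sketch below uses toy data purely to show the evaluator's shape; the queries, corpus, and relevance judgments are assumptions:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")

# Toy stand-ins for the real evaluation data (not the actual val-ir-eval split)
queries = {"q1": "How many active ASes are reported in the CIDR report?"}
corpus = {
    "d1": "There are ~71 000 active ASes according to the CIDR report.",
    "d2": "SCI-GIB is better at finding the greenest set of paths.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="val-ir-eval")
results = evaluator(model)  # dict with keys such as 'val-ir-eval_cosine_ndcg@10'
```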
Training dataset: each example pairs a question (sentence_0) with a document passage (sentence_1).

| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |

Example sentence_0 queries:

| sentence_0 |
|---|
| What specific snippet of the resolver-recv-answer-for-client rule is presented in the document? |
| What is the relationship between early adopters and the potential security improvements mentioned for SBAS in the document? |
| How does the evaluation in this study focus on user-driven path control within SCION? |
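For fine-tuning, rows of this two-column shape can be supplied as a 🤗 datasets.Dataset. A sketch with made-up pairs in the same sentence_0/sentence_1 layout; the real training pairs are not shown here:

```python
from datasets import Dataset

# Illustrative (question, passage) pairs in the two-column training layout
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "How are ASes authenticated on today's Internet?",
        "Which beaconing algorithm finds the greenest set of paths?",
    ],
    "sentence_1": [
        "RPKI is used as a trust anchor to authenticate ASes.",
        "SCI-GIB is better at finding the greenest set of paths.",
    ],
})
```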
Loss: MultipleNegativesRankingLoss with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
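MultipleNegativesRankingLoss treats each (sentence_0, sentence_1) pair as a positive and every other sentence_1 in the batch as a negative, so the batch size of 64 used here gives each query 63 in-batch negatives. A sketch of how the loss is typically instantiated with the parameters above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 multiplies the cosine similarities before the softmax cross-entropy
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```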
Training hyperparameters

Non-default hyperparameters:

- eval_strategy: steps
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- fp16: True
- multi_dataset_batch_sampler: round_robin

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

Training logs

| Epoch | Step | Training Loss | val-ir-eval_cosine_ndcg@10 |
|---|---|---|---|
| 0.1372 | 100 | - | 0.6950 |
| 0.2743 | 200 | - | 0.7313 |
| 0.4115 | 300 | - | 0.7443 |
| 0.5487 | 400 | - | 0.7573 |
| 0.6859 | 500 | 0.3862 | 0.7576 |
| 0.8230 | 600 | - | 0.7627 |
| 0.9602 | 700 | - | 0.7662 |
| 1.0 | 729 | - | 0.7709 |
| 1.0974 | 800 | - | 0.7705 |
| 1.2346 | 900 | - | 0.7718 |
| 1.3717 | 1000 | 0.2356 | 0.7747 |
| 1.5089 | 1100 | - | 0.7742 |
| 1.6461 | 1200 | - | 0.7759 |
| 1.7833 | 1300 | - | 0.7776 |
| 1.9204 | 1400 | - | 0.7807 |
| 2.0 | 1458 | - | 0.7815 |
| 2.0576 | 1500 | 0.1937 | 0.7789 |
| 2.1948 | 1600 | - | 0.7814 |
| 2.3320 | 1700 | - | 0.7819 |
| 2.4691 | 1800 | - | 0.7823 |
| 2.6063 | 1900 | - | 0.7827 |
| 2.7435 | 2000 | 0.1758 | 0.7828 |
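The non-default hyperparameters above map directly onto SentenceTransformerTrainingArguments. A minimal end-to-end sketch, assuming the sentence-transformers v3 trainer API; the output directory, the toy dataset, and the toy evaluator are placeholders, and eval_steps=100 is inferred from the 100-step evaluation cadence in the log:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import InformationRetrievalEvaluator
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model)

# Placeholder data in the sentence_0/sentence_1 layout described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["How are ASes authenticated?"],
    "sentence_1": ["RPKI is used as a trust anchor to authenticate ASes."],
})
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "How are ASes authenticated?"},
    corpus={"d1": "RPKI is used as a trust anchor to authenticate ASes."},
    relevant_docs={"q1": {"d1"}},
    name="val-ir-eval",
)

args = SentenceTransformerTrainingArguments(
    output_dir="scion-minilm-l6-v3",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=5e-5,
    fp16=True,
    eval_strategy="steps",
    eval_steps=100,  # matches the evaluation cadence in the training log
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()
```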
Citation

Sentence Transformers:

```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss:

```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Base model: sentence-transformers/all-MiniLM-L6-v2