SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
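The pooling stack is plain mean pooling over token embeddings followed by L2 normalization. As a minimal sketch of what modules (1) and (2) compute, assuming the repo exposes the underlying BertModel weights the way sentence-transformers repos normally do:

```python
# Sketch of what the Pooling and Normalize modules compute, using raw transformers.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("LeoChiuu/all-MiniLM-L6-v2-negations")
bert = AutoModel.from_pretrained("LeoChiuu/all-MiniLM-L6-v2-negations")

encoded = tokenizer(
    ["An example sentence."],
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling with pooling_mode_mean_tokens: average the token embeddings,
# using the attention mask so padding tokens are ignored.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: L2-normalize so dot products equal cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])
```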
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")
# Run inference
sentences = [
'He published a history of Cornwall, New York in 1873.',
'He failed to publish a history of Cornwall, New York in 1873.',
"Salafis assert that reliance on taqlid has led to Islam 's decline.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
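Because the Normalize module makes the embeddings unit-length, cosine similarity can be used directly for the downstream tasks listed above. As a small semantic-search sketch with this model, where the corpus and query are made up for illustration:

```python
# Sketch: semantic search with the same model; corpus and query are hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2-negations")

corpus = [
    "He published a history of Cornwall, New York in 1873.",
    "He failed to publish a history of Cornwall, New York in 1873.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(
    "Was the Cornwall history published?", convert_to_tensor=True
)

# Rank corpus sentences by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```

Since the model was fine-tuned on negation pairs labeled as dissimilar (see the training data below), the negated corpus sentence should score lower against the original than it would under the base model.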
The training data has three columns: sentence_0, sentence_1, and label.

| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | int |

Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| The situation in Yemen was already much better than it was in Bahrain. | The situation in Yemen was not much better than Bahrain. | 0 |
| She was a member of the Gamma Theta Upsilon honour society of geography. | She was denied membership of the Gamma Theta Upsilon honour society of mathematics. | 0 |
| Which aren't small and not worth the price. | Which are small and not worth the price. | 0 |
CosineSimilarityLoss with these parameters:

{
    "loss_fct": "torch.nn.modules.loss.MSELoss"
}
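In sentence-transformers, CosineSimilarityLoss computes the cosine similarity of the two sentence embeddings and applies the configured loss_fct, here MSELoss, against the float label. A minimal sketch of that computation:

```python
# Sketch of the CosineSimilarityLoss computation with loss_fct = MSELoss:
# push cosine(sentence_0, sentence_1) toward the float label.
import torch
import torch.nn.functional as F

def cosine_similarity_loss(emb_a, emb_b, labels):
    scores = F.cosine_similarity(emb_a, emb_b, dim=1)  # in [-1, 1]
    return F.mse_loss(scores, labels.float())

# A negation pair labeled 0 is pushed toward cosine similarity 0.
emb_a, emb_b = torch.randn(2, 384), torch.randn(2, 384)
print(cosine_similarity_loss(emb_a, emb_b, torch.tensor([0.0, 0.0])))
```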
Non-default hyperparameters:
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin

All hyperparameters:
- overwrite_output_dir: False
- do_predict: False
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
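As a sketch of how these settings map onto the sentence-transformers v3 training API; the tiny inline dataset is a hypothetical stand-in for the actual negation-pair training data, which is not published with this card:

```python
# Sketch: reproducing the fine-tuning setup with the non-default
# hyperparameters above. The inline dataset is a placeholder.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_dataset = Dataset.from_dict({
    "sentence_0": ["Which aren't small and not worth the price."],
    "sentence_1": ["Which are small and not worth the price."],
    "label": [0.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-negations",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),  # loss_fct defaults to torch.nn.MSELoss
)
trainer.train()
```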
Training logs:

| Epoch | Step | Training Loss |
|---|---|---|
| 0.1034 | 500 | 0.3382 |
| 0.2068 | 1000 | 0.2112 |
| 0.3102 | 1500 | 0.1649 |
| 0.4136 | 2000 | 0.1454 |
| 0.5170 | 2500 | 0.1244 |
| 0.6203 | 3000 | 0.1081 |
| 0.7237 | 3500 | 0.0962 |
| 0.8271 | 4000 | 0.0924 |
| 0.9305 | 4500 | 0.0852 |
| 1.0339 | 5000 | 0.0812 |
| 1.1373 | 5500 | 0.0833 |
| 1.2407 | 6000 | 0.0736 |
| 1.3441 | 6500 | 0.0756 |
| 1.4475 | 7000 | 0.0665 |
| 1.5509 | 7500 | 0.0661 |
| 1.6543 | 8000 | 0.0625 |
| 1.7577 | 8500 | 0.0621 |
| 1.8610 | 9000 | 0.0593 |
| 1.9644 | 9500 | 0.054 |
| 2.0678 | 10000 | 0.0569 |
| 2.1712 | 10500 | 0.0566 |
| 2.2746 | 11000 | 0.0502 |
| 2.3780 | 11500 | 0.0516 |
| 2.4814 | 12000 | 0.0455 |
| 2.5848 | 12500 | 0.0454 |
| 2.6882 | 13000 | 0.0424 |
| 2.7916 | 13500 | 0.044 |
| 2.8950 | 14000 | 0.0376 |
| 2.9983 | 14500 | 0.0386 |
| 3.1017 | 15000 | 0.0392 |
| 3.2051 | 15500 | 0.0344 |
| 3.3085 | 16000 | 0.0348 |
| 3.4119 | 16500 | 0.0343 |
| 3.5153 | 17000 | 0.0322 |
| 3.6187 | 17500 | 0.0324 |
| 3.7221 | 18000 | 0.0278 |
| 3.8255 | 18500 | 0.0294 |
| 3.9289 | 19000 | 0.0292 |
| 4.0323 | 19500 | 0.0276 |
| 4.1356 | 20000 | 0.0285 |
| 4.2390 | 20500 | 0.026 |
| 4.3424 | 21000 | 0.0271 |
| 4.4458 | 21500 | 0.0248 |
| 4.5492 | 22000 | 0.0245 |
| 4.6526 | 22500 | 0.0253 |
| 4.7560 | 23000 | 0.022 |
| 4.8594 | 23500 | 0.0219 |
| 4.9628 | 24000 | 0.0207 |
| 5.0662 | 24500 | 0.0212 |
| 5.1696 | 25000 | 0.0218 |
| 5.2730 | 25500 | 0.0192 |
| 5.3763 | 26000 | 0.0198 |
| 5.4797 | 26500 | 0.0183 |
| 5.5831 | 27000 | 0.02 |
| 5.6865 | 27500 | 0.0176 |
| 5.7899 | 28000 | 0.0184 |
| 5.8933 | 28500 | 0.0157 |
| 5.9967 | 29000 | 0.0175 |
| 6.1001 | 29500 | 0.0175 |
| 6.2035 | 30000 | 0.0163 |
| 6.3069 | 30500 | 0.0173 |
| 6.4103 | 31000 | 0.0165 |
| 6.5136 | 31500 | 0.0152 |
| 6.6170 | 32000 | 0.0155 |
| 6.7204 | 32500 | 0.0132 |
| 6.8238 | 33000 | 0.0147 |
| 6.9272 | 33500 | 0.0145 |
| 7.0306 | 34000 | 0.014 |
| 7.1340 | 34500 | 0.0147 |
| 7.2374 | 35000 | 0.0126 |
| 7.3408 | 35500 | 0.0141 |
| 7.4442 | 36000 | 0.0127 |
| 7.5476 | 36500 | 0.0132 |
| 7.6510 | 37000 | 0.0125 |
| 7.7543 | 37500 | 0.0111 |
| 7.8577 | 38000 | 0.011 |
| 7.9611 | 38500 | 0.0125 |
| 8.0645 | 39000 | 0.0128 |
| 8.1679 | 39500 | 0.013 |
| 8.2713 | 40000 | 0.0115 |
| 8.3747 | 40500 | 0.0111 |
| 8.4781 | 41000 | 0.0108 |
| 8.5815 | 41500 | 0.012 |
| 8.6849 | 42000 | 0.0108 |
| 8.7883 | 42500 | 0.0105 |
| 8.8916 | 43000 | 0.0092 |
| 8.9950 | 43500 | 0.0115 |
| 9.0984 | 44000 | 0.0112 |
| 9.2018 | 44500 | 0.0096 |
| 9.3052 | 45000 | 0.0106 |
| 9.4086 | 45500 | 0.011 |
| 9.5120 | 46000 | 0.01 |
| 9.6154 | 46500 | 0.011 |
| 9.7188 | 47000 | 0.0097 |
| 9.8222 | 47500 | 0.0096 |
| 9.9256 | 48000 | 0.0102 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
Base model: sentence-transformers/all-MiniLM-L6-v2