How to use lingtrain/labse-chuvash-3 with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lingtrain/labse-chuvash-3")

sentences = [
    "Ҫак йӗри-тавра хупӑрланӑ хура пӗлӗтлӗ юр капламӗсен хыҫӗнче ҫуртсем пуррине ӑспа ҫеҫ тавҫӑрса илме пулать, вӗсенчен пӗринче, пиллӗкмӗш урамра, ҫиччӗмӗш ҫуртра, манӑн Катя тимӗр сухарисене (доктор галечӗсене) эпӗ каланӑ пек камин ҫине ӑшӑтма хурать.",
    "Лишь фантастическое воображение могло представить, что где-то за этими чёрными тучами сталкивающегося снега стоят дома и в одном из них, на Пятой линии, семь, Катя кладёт твёрдые, как железо, галеты на камин, чтобы отогреть их, по моему совету.",
    "«Кроткие наследуют землю и насладятся обилием мира», — говорится в Библии.",
    "9. Скажи: так говорит Господь Бог: будет ли ей успех?",
]
embeddings = model.encode(sentences)

# Pairwise cosine similarities between all four sentences.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [4, 4]
```

This is a sentence-transformers model finetuned from sentence-transformers/LaBSE. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
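Beyond pairwise similarity, the same embeddings support cross-lingual semantic search. Here is a minimal, illustrative sketch; the corpus and query reuse Chuvash/Russian sentence pairs quoted elsewhere in this card:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lingtrain/labse-chuvash-3")

# Chuvash corpus, Russian query; because LaBSE embeds both languages into
# one space, the translated sentence should score highest.
corpus = [
    "Юлашкинчен пирӗн гаубицӑсем те ӗҫе тытӑнчӗҫ.",
    "— Чавсаран? — тӗлӗнчӗ Ван-Конет.",
]
query = "Наконец открыли огонь и наши гаубицы."

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 2]
best = int(scores.argmax())
print(corpus[best], float(scores[0, best]))
```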
Full model architecture:

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
  (3): Normalize()
)
```
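The trailing Normalize() module scales every vector to unit length, so cosine similarity and dot product coincide. A quick sanity check (an illustrative sketch; the test sentence is taken from the examples below):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lingtrain/labse-chuvash-3")

print(model.get_sentence_embedding_dimension())  # 768
print(model.max_seq_length)                      # 256

# Normalize() makes every embedding unit-length.
embedding = model.encode(["И разведчики это поняли."])
print(np.linalg.norm(embedding))  # ~1.0
```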
First install the Sentence Transformers library:
```
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lingtrain/labse-chuvash-3")

# Run inference
sentences = [
    'Вӑл пӗлет: ҫак карапӑн командирӗ ҫамрӑк моряк, ӗлӗк артековец пулнӑскер, хӑйне вӗрентсе ӳстернӗ лагере асра тытса халӗ те тав туса саламлать.',
    'Он уже знал, что кораблем этим командует молодой моряк-командир, сам когда-то бывший артековец и поныне хранящий благодарную память о лагере.',
    'И разведчики это поняли.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
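The description above also mentions paraphrase mining; the library's util.paraphrase_mining helper covers that use case directly. A minimal sketch, reusing sentences from this card:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("lingtrain/labse-chuvash-3")

sentences = [
    "— Чавсаран? — тӗлӗнчӗ Ван-Конет.",
    "— Локоть? — удивился Ван-Конет.",
    "И разведчики это поняли.",
]

# Returns (score, i, j) triples sorted by decreasing similarity; the
# Chuvash/Russian translation pair should come out on top.
for score, i, j in util.paraphrase_mining(model, sentences):
    print(f"{score:.3f}  {sentences[i]}  <->  {sentences[j]}")
```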
The training dataset has columns sentence_0, sentence_1, and label:

|  | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | float |

Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| Каяссипе каяс марри ҫинчен шухӑшланӑ ҫӗртех Петян каймалла пулнӑ, мӗншӗн тесен ачасем чылай малалла утнӑ ӗнтӗ. | Так что, когда в страшной борьбе с совестью победа осталась все-таки на стороне Пети, а совесть была окончательно раздавлена, оказалось, что мальчики зашли уже довольно далеко. | 1.0 |
| — Чавсаран? — тӗлӗнчӗ Ван-Конет. | — Локоть? — удивился Ван-Конет. | 1.0 |
| Юлашкинчен пирӗн гаубицӑсем те ӗҫе тытӑнчӗҫ. | Наконец открыли огонь и наши гаубицы. | 1.0 |
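A dataset in this pair format can be assembled with the datasets library. A hypothetical sketch using the samples above:

```python
from datasets import Dataset

# Aligned Chuvash/Russian sentences as positive pairs (label 1.0),
# matching the column layout shown in the table above.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "— Чавсаран? — тӗлӗнчӗ Ван-Конет.",
        "Юлашкинчен пирӗн гаубицӑсем те ӗҫе тытӑнчӗҫ.",
    ],
    "sentence_1": [
        "— Локоть? — удивился Ван-Конет.",
        "Наконец открыли огонь и наши гаубицы.",
    ],
    "label": [1.0, 1.0],
})
print(train_dataset)
```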
Loss: MultipleNegativesRankingLoss with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
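In code, this loss would be set up roughly as follows (a sketch; the base model and parameters mirror the configuration listed above):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/LaBSE")

# scale=20.0 and cos_sim match the listed parameters; for each sentence_0,
# every other in-batch sentence_1 serves as a negative.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```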
Training hyperparameters (non-default):

- eval_strategy: steps
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- num_train_epochs: 1
- fp16: True
- multi_dataset_batch_sampler: round_robin

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

Training logs (a minimal fine-tuning sketch follows this table):

| Epoch | Step | Training Loss |
|---|---|---|
| 0.0069 | 500 | 0.6741 |
| 0.0137 | 1000 | 0.4247 |
| 0.0206 | 1500 | 0.3538 |
| 0.0275 | 2000 | 0.334 |
| 0.0344 | 2500 | 0.3155 |
| 0.0412 | 3000 | 0.2833 |
| 0.0481 | 3500 | 0.2689 |
| 0.0550 | 4000 | 0.2633 |
| 0.0618 | 4500 | 0.2577 |
| 0.0687 | 5000 | 0.2642 |
| 0.0756 | 5500 | 0.2484 |
| 0.0825 | 6000 | 0.237 |
| 0.0893 | 6500 | 0.2225 |
| 0.0962 | 7000 | 0.2359 |
| 0.1031 | 7500 | 0.2266 |
| 0.1099 | 8000 | 0.2222 |
| 0.1168 | 8500 | 0.2136 |
| 0.1237 | 9000 | 0.2236 |
| 0.1306 | 9500 | 0.2149 |
| 0.1374 | 10000 | 0.2199 |
| 0.1443 | 10500 | 0.206 |
| 0.1512 | 11000 | 0.216 |
| 0.1580 | 11500 | 0.2069 |
| 0.1649 | 12000 | 0.1903 |
| 0.1718 | 12500 | 0.1958 |
| 0.1786 | 13000 | 0.2076 |
| 0.1855 | 13500 | 0.2033 |
| 0.1924 | 14000 | 0.1893 |
| 0.1993 | 14500 | 0.2024 |
| 0.2061 | 15000 | 0.1873 |
| 0.2130 | 15500 | 0.1788 |
| 0.2199 | 16000 | 0.1959 |
| 0.2267 | 16500 | 0.1996 |
| 0.2336 | 17000 | 0.183 |
| 0.2405 | 17500 | 0.185 |
| 0.2474 | 18000 | 0.1752 |
| 0.2542 | 18500 | 0.1856 |
| 0.2611 | 19000 | 0.1948 |
| 0.2680 | 19500 | 0.1826 |
| 0.2748 | 20000 | 0.1672 |
| 0.2817 | 20500 | 0.1746 |
| 0.2886 | 21000 | 0.1801 |
| 0.2955 | 21500 | 0.1847 |
| 0.3023 | 22000 | 0.1673 |
| 0.3092 | 22500 | 0.1788 |
| 0.3161 | 23000 | 0.1667 |
| 0.3229 | 23500 | 0.1746 |
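Putting the pieces together, a comparable fine-tuning run could look roughly like this. This is a minimal sketch under the non-default hyperparameters listed above; output_dir is a hypothetical path, and the one-pair dataset is a stand-in for the real parallel corpus:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Tiny stand-in for the real Chuvash/Russian pair dataset sketched earlier.
train_dataset = Dataset.from_dict({
    "sentence_0": ["— Чавсаран? — тӗлӗнчӗ Ван-Конет."],
    "sentence_1": ["— Локоть? — удивился Ван-Конет."],
})

model = SentenceTransformer("sentence-transformers/LaBSE")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="labse-chuvash",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=20,
    learning_rate=5e-5,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```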
Citation

BibTeX (Sentence Transformers):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

BibTeX (MultipleNegativesRankingLoss):

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
Base model: sentence-transformers/LaBSE