Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Paper: arXiv:1908.10084
This is a sentence-transformers model finetuned from microsoft/mpnet-base on the wsc_queries_with_negatives dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'MPNetModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
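The pooling layer above uses mean pooling (`pooling_mode_mean_tokens: True`): the transformer's token embeddings are averaged, weighted by the attention mask, into one 768-dimensional sentence vector. A minimal sketch of that step on hypothetical random inputs (the function name `mean_pool` is illustrative, not part of the library API):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real (non-padding) tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # average token embedding

# Hypothetical batch: 2 sentences, 12 tokens each, no padding
emb = mean_pool(torch.randn(2, 12, 768), torch.ones(2, 12))
print(emb.shape)  # torch.Size([2, 768])
```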
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AL3110/mpnet-wsc-finetuned-triplet")
# Run inference
queries = [
"What should I do if I get injured while working from my home office?",
]
documents = [
'You are expected to report the injury via the Incident Reporting Form within 48 hours of the occurrence.',
"Yes, your manager can approve workflows in your queue by using the 'switch to' option in WorklistPlus.",
'Participating countries/regions include Bangladesh, China, Guam, Hong Kong, Macau, Pakistan, Philippines, Singapore, Sri Lanka, and Thailand. Australia and New Zealand have a separate shutdown period.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9211, 0.9136, 0.9134]])
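`model.similarity` scores every query embedding against every document embedding; for Sentence Transformers models this is typically cosine similarity (an assumption here, since the card does not state the configured similarity function). A minimal NumPy sketch of that computation on hypothetical random vectors:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Normalize each row to unit length, then take dot products:
    # entry (i, j) is the cosine similarity of a[i] and b[j].
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Hypothetical stand-ins for the query/document embeddings above
q = np.random.rand(1, 768).astype(np.float32)
d = np.random.rand(3, 768).astype(np.float32)
print(cosine_sim(q, d).shape)  # (1, 3)
```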
Training dataset columns: query, answer, and negative_answer.

| | query | answer | negative_answer |
|---|---|---|---|
| type | string | string | string |

Sample rows:

| query | answer | negative_answer |
|---|---|---|
| What is Oracle's policy on drugs and alcohol in the workplace? | Oracle maintains a strict policy prohibiting the use, possession, sale, or transfer of illegal drugs at any time while conducting company business. Reporting to work under the influence of alcohol or illegal drugs is also prohibited. | After discussing with your manager, you can modify your WSC in HCM by navigating to: Me >> Journeys >> Request Workspace Category Journey >> Select Workspace Category Details form >> Complete the details and Submit. |
| If I change my WSC status from Assigned to Remote, do I need to raise another transaction to change it back after a certain period? | No, you do not need to initiate another transaction unless you need to modify your WSC again in the future. The change has no end date. | This category applies to employees who have access to workspaces within their local office but do not have a permanently designated space. |
| What should I do if my manager asks me to work remotely or flex? | First, have a conversation with your manager to finalize the arrangement and start date. Then, log into HCM to submit a transaction to change your workspace category. You can track its status on WorklistPlus. | No, managers are not authorized to issue letters directly on behalf of Oracle. They can provide a personal testimony, but it cannot be on Oracle letterhead or imply it is from the company. |
Loss: TripletLoss with these parameters:

{
    "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
    "triplet_margin": 5
}
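With the Euclidean distance metric and margin 5 listed above, the triplet loss pushes the anchor (query) closer to the positive (answer) than to the negative (negative_answer) by at least the margin: loss = max(‖a − p‖ − ‖a − n‖ + margin, 0). A minimal sketch on hypothetical vectors (not the library's implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Euclidean triplet loss: penalize unless the positive is at least
    # `margin` closer to the anchor than the negative is.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

a = np.zeros(4)          # hypothetical anchor embedding
p = np.full(4, 0.1)      # nearby positive: distance 0.2
n = np.full(4, 3.0)      # distant negative: distance 6.0
print(triplet_loss(a, p, n))  # 0.0 — margin already satisfied
```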
Evaluation dataset columns: query, answer, and negative_answer.

| | query | answer | negative_answer |
|---|---|---|---|
| type | string | string | string |

Sample rows:

| query | answer | negative_answer |
|---|---|---|
| Can I work from a different country? | Working from another country is generally not permitted unless approved under specific circumstances, as Oracle requires employees to live and work in the country where they are paid. You should refer to the 'Living and Working in Payroll Country Policy' for details. | You will receive a notification from WorklistPlus with the reason for rejection. You should discuss the reason with your manager or the relevant team (HR, RE&F) and then reinitiate the workflow from Me >> Journeys >> Completed. |
| When is the JAPAC Year-End Break for 2024? | The Oracle JAPAC Year-End Break is from December 26 to December 31, 2024. | The system is currently configured to allow these changes to be initiated only by the employee. |
| What is the definition of the 'Flexible' workspace category? | This category applies to employees who have access to workspaces within their local office but do not have a permanently designated space. | No, there is no end date. Once your WSC is approved, it remains active until you or your manager initiate another change. |
Loss: TripletLoss with these parameters:

{
    "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
    "triplet_margin": 5
}
Non-default hyperparameters:

- eval_strategy: epoch
- learning_rate: 2e-05
- num_train_epochs: 5
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- batch_sampler: no_duplicates

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}

Training logs:

| Epoch | Step | Training Loss | Validation Loss |
|---|---|---|---|
| 1.0 | 5 | - | 5.2933 |
| 2.0 | 10 | 5.2335 | 5.3693 |
| 3.0 | 15 | - | 5.4665 |
| 4.0 | 20 | 4.8863 | 5.5259 |
| 5.0 | 25 | - | 5.5477 |
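As a sketch, the non-default hyperparameters above could be expressed with the Sentence Transformers v3+ training API roughly as follows. This is an illustration, not the training script actually used: the output path is hypothetical, and `save_strategy="epoch"` is an added assumption (the card does not list it, but `load_best_model_at_end=True` requires the save and evaluation strategies to match).

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="mpnet-wsc-output",            # hypothetical path
    eval_strategy="epoch",
    save_strategy="epoch",                    # assumed; see lead-in
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts per batch
)
```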
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Base model
microsoft/mpnet-base