SentenceTransformer based on microsoft/mpnet-base

This is a sentence-transformers model finetuned from microsoft/mpnet-base on the wsc_queries_with_negatives dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: microsoft/mpnet-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: wsc_queries_with_negatives

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
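
The Pooling module above uses mean-token pooling (pooling_mode_mean_tokens: True): the sentence embedding is the average of the non-padding token embeddings. A minimal sketch of that pooling step, using random NumPy arrays as stand-ins for real MPNet token embeddings:

```python
import numpy as np

# Toy token embeddings for one sentence: 4 tokens, 768 dims,
# where the last token is padding (attention mask = 0).
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(4, 768))
attention_mask = np.array([1, 1, 1, 0])

# Mean pooling: zero out padding tokens, then average over real tokens.
masked = token_embeddings * attention_mask[:, None]
sentence_embedding = masked.sum(axis=0) / attention_mask.sum()
assert sentence_embedding.shape == (768,)  # one 768-dim vector per sentence
```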

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AL3110/mpnet-wsc-finetuned-triplet")
# Run inference
queries = [
    "What should I do if I get injured while working from my home office?",
]
documents = [
    'You are expected to report the injury via the Incident Reporting Form within 48 hours of the occurrence.',
    "Yes, your manager can approve workflows in your queue by using the 'switch to' option in WorklistPlus.",
    'Participating countries/regions include Bangladesh, China, Guam, Hong Kong, Macau, Pakistan, Philippines, Singapore, Sri Lanka, and Thailand. Australia and New Zealand have a separate shutdown period.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9211, 0.9136, 0.9134]])

Training Details

Training Dataset

wsc_queries_with_negatives

  • Dataset: wsc_queries_with_negatives at d342836
  • Size: 35 training samples
  • Columns: query, answer, and negative_answer
  • Approximate statistics based on the first 35 samples:
    • query (string): min 11, mean 16.54, max 30 tokens
    • answer (string): min 18, mean 37.71, max 58 tokens
    • negative_answer (string): min 18, mean 38.57, max 71 tokens
  • Samples:
    • query: What is Oracle's policy on drugs and alcohol in the workplace?
      answer: Oracle maintains a strict policy prohibiting the use, possession, sale, or transfer of illegal drugs at any time while conducting company business. Reporting to work under the influence of alcohol or illegal drugs is also prohibited.
      negative_answer: After discussing with your manager, you can modify your WSC in HCM by navigating to: Me >> Journeys >> Request Workspace Category Journey >> Select Workspace Category Details form >> Complete the details and Submit.
    • query: If I change my WSC status from Assigned to Remote, do I need to raise another transaction to change it back after a certain period?
      answer: No, you do not need to initiate another transaction unless you need to modify your WSC again in the future. The change has no end date.
      negative_answer: This category applies to employees who have access to workspaces within their local office but do not have a permanently designated space.
    • query: What should I do if my manager asks me to work remotely or flex?
      answer: First, have a conversation with your manager to finalize the arrangement and start date. Then, log into HCM to submit a transaction to change your workspace category. You can track its status on WorklistPlus.
      negative_answer: No, managers are not authorized to issue letters directly on behalf of Oracle. They can provide a personal testimony, but it cannot be on Oracle letterhead or imply it is from the company.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
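
TripletLoss with the Euclidean metric trains the model so that each query embedding ends up at least triplet_margin closer to its answer than to its negative_answer. A toy NumPy illustration of the per-triplet objective (the vectors here are random stand-ins, not real model outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
anchor = rng.normal(size=768)                    # query embedding (stand-in)
positive = anchor + 0.1 * rng.normal(size=768)   # close to the anchor
negative = rng.normal(size=768)                  # unrelated vector

margin = 5.0  # matches triplet_margin above
d_ap = np.linalg.norm(anchor - positive)  # Euclidean distance to positive
d_an = np.linalg.norm(anchor - negative)  # Euclidean distance to negative

# Loss is zero once the negative is more than `margin` farther than the positive.
loss = max(d_ap - d_an + margin, 0.0)
```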
    

Evaluation Dataset

wsc_queries_with_negatives

  • Dataset: wsc_queries_with_negatives at d342836
  • Size: 9 evaluation samples
  • Columns: query, answer, and negative_answer
  • Approximate statistics based on the first 9 samples:
    • query (string): min 10, mean 16.33, max 26 tokens
    • answer (string): min 19, mean 36.11, max 71 tokens
    • negative_answer (string): min 19, mean 32.78, max 54 tokens
  • Samples:
    • query: Can I work from a different country?
      answer: Working from another country is generally not permitted unless approved under specific circumstances, as Oracle requires employees to live and work in the country where they are paid. You should refer to the ‘Living and Working in Payroll Country Policy’ for details.
      negative_answer: You will receive a notification from WorklistPlus with the reason for rejection. You should discuss the reason with your manager or the relevant team (HR, RE&F) and then reinitiate the workflow from Me >> Journeys >> Completed.
    • query: When is the JAPAC Year-End Break for 2024?
      answer: The Oracle JAPAC Year-End Break is from December 26 to December 31, 2024.
      negative_answer: The system is currently configured to allow these changes to be initiated only by the employee.
    • query: What is the definition of the 'Flexible' workspace category?
      answer: This category applies to employees who have access to workspaces within their local office but do not have a permanently designated space.
      negative_answer: No, there is no end date. Once your WSC is approved, it remains active until you or your manager initiate another change.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
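
The non-default values above can be expressed as a SentenceTransformerTrainingArguments configuration (a sketch, assuming Sentence Transformers v3+; output_dir is a placeholder, and save_strategy is added because load_best_model_at_end requires it to match eval_strategy):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="mpnet-wsc-finetuned-triplet",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```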

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch | Step | Training Loss | Validation Loss
----- | ---- | ------------- | ---------------
1.0   | 5    | -             | 5.2933
2.0   | 10   | 5.2335        | 5.3693
3.0   | 15   | -             | 5.4665
4.0   | 20   | 4.8863        | 5.5259
5.0   | 25   | -             | 5.5477
  • With load_best_model_at_end enabled, the saved checkpoint is the epoch 1.0 row, which has the lowest validation loss (5.2933).

Framework Versions

  • Python: 3.12.10
  • Sentence Transformers: 5.1.0
  • Transformers: 4.55.4
  • PyTorch: 2.8.0+cpu
  • Accelerate: 1.10.0
  • Datasets: 4.0.0
  • Tokenizers: 0.21.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}