SentenceTransformer based on sucharush/e5_stem_finetuned

This is a sentence-transformers model finetuned from sucharush/e5_stem_finetuned. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sucharush/e5_stem_finetuned
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
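These properties can be checked directly on the loaded model; the following is a minimal sketch using standard Sentence Transformers attributes (the repository id matches the one used in the Usage section below).

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Stergios-Konstantinidis/MNLP_M3_tokenizer_tuned_pos_neg_2_ls")

# Inputs longer than this many tokens are truncated before encoding
print(model.max_seq_length)                      # 512

# Dimensionality of the produced sentence embeddings
print(model.get_sentence_embedding_dimension())  # 768

# Similarity function used by model.similarity(...)
print(model.similarity_fn_name)                  # cosine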

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
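For reference, an equivalent module stack can be assembled by hand with the sentence_transformers.models API. This is a minimal sketch built from the architecture printout above (using the base checkpoint as the transformer), not the exact construction script for this model.

from sentence_transformers import SentenceTransformer, models

# (0) Transformer: BERT encoder, 512-token limit, no lowercasing
word_embedding_model = models.Transformer(
    "sucharush/e5_stem_finetuned", max_seq_length=512, do_lower_case=False
)

# (1) Pooling: mean over token embeddings -> 768-dimensional sentence vector
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

# (2) Normalize: unit-length embeddings, so dot product equals cosine similarity
normalize_model = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize_model])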

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Stergios-Konstantinidis/MNLP_M3_tokenizer_tuned_pos_neg_2_ls")
# Run inference
sentences = [
    'Question: what kind of beer is bud light lime, Answer: Anheuser-Busch brands Bud Light with lime flavor added. It has 116 calories per 12\xa0US\xa0fl\xa0oz serving (1,370\xa0kJ/L). Released in May 2008 with 4.2% alcohol content, the same alcohol content as Bud Light.',
    'Question: what kind of beer is bud light lime, Answer: Anheuser-Busch brands Bud Light with lime flavor added. It has 116 calories per 12\xa0US\xa0fl\xa0oz serving (1,370\xa0kJ/L). Released in May 2008 with 4.2% alcohol content, the same alcohol content as Bud Light.',
    '[{\'from\': \'human\', \'value\': \'There are 192 items that are members of set U. Of these items, some are members of set B, 59 are not members of either set A or set B, and 23 are members of both sets A and B. There are 107 members of set U that are members of set A. How many of the members of set U are members of set B?\'}, {\'from\': \'gpt\', \'value\': "Let\'s denote the number of members of set U that are members of set B as |B|. We are given the following information:\\n\\n- The total number of items in set U is 192.\\n- There are 59 items that are not members of either set A or set B.\\n- There are 23 items that are members of both sets A and B (the intersection of A and B).\\n- There are 107 members of set U that are members of set A.\\n\\nWe can use the principle of inclusion-exclusion to find the number of members of set B. The principle of inclusion-exclusion states that for any two sets A and B:\\n\\n|A ∪ B| = |A| + |B| - |A ∩ B|\\n\\nWe want to find |B|, and we can rearrange the formula to solve for |B|:\\n\\n|B| = |A ∪ B| - |A| + |A ∩ B|\\n\\nWe know that |A ∪ B| is the total number of items in set U minus the number of items that are not in either set A or B:\\n\\n|A ∪ B| = |U| - (number of items not in A or B)\\n|A ∪ B| = 192 - 59\\n|A ∪ B| = 133\\n\\nNow we can plug in the values we know into the rearranged formula:\\n\\n|B| = 133 - 107 + 23\\n|B| = 26 + 23\\n|B| = 49\\n\\nTherefore, there are 49 members of set U that are members of set B."}]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
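Because the embeddings are unit-normalized and compared with cosine similarity, the same model can also be used for simple semantic search. The snippet below is an illustrative sketch with made-up texts, continuing from the model loaded above.

# Semantic search: rank a small corpus against a query (illustrative texts only)
query = "what kind of beer is bud light lime"
corpus = [
    "Bud Light Lime is a Bud Light variant brewed with lime flavor added.",
    "The Rockefeller Center Christmas Tree is lit each year in late November.",
]

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# Cosine similarities between the query and each corpus entry
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = scores.argmax().item()
print(corpus[best])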

Training Details

Training Dataset

Unnamed Dataset

  • Size: 153,400 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 17 tokens, mean 206.37 tokens, max 512 tokens
    • sentence_1: string; min 17 tokens, mean 206.37 tokens, max 512 tokens
    • label: int; 1: 100.00%
  • Samples (in each row below, sentence_0 and sentence_1 are identical and the label is 1):
    • Question: where did the term dark ages come from, Answer: Dark Ages (historiography) The term employs traditional light-versus-darkness imagery to contrast the era's "darkness" with earlier and later periods of "light".[3] The concept of a "Dark Age" originated in the 1330s with the Italian scholar Petrarch, who regarded the post-Roman centuries as "dark" compared to the light of classical antiquity.[3][4] The phrase "Dark Age" itself derives from the Latin saeculum obscurum, originally applied by Caesar Baronius in 1602 to a tumultuous period in the 10th and 11th centuries.[5] The concept thus came to characterize the entire Middle Ages as a time of intellectual darkness between the fall of Rome and the Renaissance; this became especially popular during the 18th-century Age of Enlightenment.[3]
    • Question: when do they put up tree in rockefeller center, Answer: Rockefeller Center Christmas Tree The tree, usually a Norway spruce 69 to 100 feet (21 to 30 m) tall, has been a national tradition each year since 1933.[3] The 2017 Christmas Tree Lighting took place on November 29, 2017; the tree remains on display until January 7, 2018.
    • Question: what are the lights on the curling stone, Answer: Curling A handle is attached by a bolt running vertically through a hole in the centre of the stone. The handle allows the stone to be gripped and rotated upon release; on properly prepared ice the rotation will bend (curl) the path of the stone in the direction in which the front edge of the stone is turning, especially as the stone slows. Handles are coloured to identify each team, two popular colours in major tournaments being red and yellow. In competition, an electronic handle known as the eye on the hog may be fitted to detect hog line violations. This electronically detects whether the thrower's hand is in contact with the handle as it passes the hog line and indicates a violation by lights at the base of the handle. The eye on the hog eliminates human error and the need for hog line officials. It is mandatory in high-level national and international competition, but its cost, around US$650 each, currently puts it beyon...
  • Loss: ContrastiveTensionLoss
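A minimal training sketch under these settings is shown below. It assumes the Sentence Transformers v3 trainer API and a Hugging Face Dataset with sentence_0, sentence_1, and label columns; it illustrates the setup described above rather than reproducing the exact script used for this model.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import ContrastiveTensionLoss

model = SentenceTransformer("sucharush/e5_stem_finetuned")

# Tiny stand-in for the 153,400-sample dataset described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Question: ..., Answer: ...", "Question: ..., Answer: ..."],
    "sentence_1": ["Question: ..., Answer: ...", "Question: ..., Answer: ..."],
    "label": [1, 1],
})

loss = ContrastiveTensionLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=1,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()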

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 20
  • per_device_eval_batch_size: 20
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0652 500 6.269
0.1304 1000 6.2663
0.1956 1500 6.2662
0.2608 2000 6.2661
0.3259 2500 6.2659
0.3911 3000 6.2658
0.4563 3500 6.2657
0.5215 4000 6.2656
0.5867 4500 6.2655
0.6519 5000 6.2655
0.7171 5500 6.2654
0.7823 6000 6.2654
0.8475 6500 6.2654
0.9126 7000 6.2654
0.9778 7500 6.2654

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.52.4
  • PyTorch: 2.6.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveTensionLoss

@inproceedings{carlsson2021semantic,
    title={Semantic Re-tuning with Contrastive Tension},
    author={Fredrik Carlsson and Amaru Cuba Gyllensten and Evangelia Gogoulou and Erik Ylip{\"a}{\"a} Hellqvist and Magnus Sahlgren},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=Ov_sMNau-PF}
}