SentenceTransformer based on sucharush/e5_stem_finetuned

This is a sentence-transformers model finetuned from sucharush/e5_stem_finetuned. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sucharush/e5_stem_finetuned
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
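Because the pooling module mean-pools the token embeddings and the final Normalize() module projects each sentence vector onto the unit sphere, cosine similarity between outputs reduces to a plain dot product. A minimal numpy sketch of that pipeline, using toy vectors rather than real model outputs:

```python
import numpy as np

# Toy "token embeddings" for one sentence: 4 tokens x 768 dims
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(4, 768))

# (1) Pooling: mean over tokens -> one 768-d sentence vector
sentence_vec = token_embeddings.mean(axis=0)

# (2) Normalize: project onto the unit sphere (L2 norm = 1)
sentence_vec = sentence_vec / np.linalg.norm(sentence_vec)

# After normalization, cosine similarity is just a dot product;
# self-similarity is exactly 1.0
print(round(float(sentence_vec @ sentence_vec), 4))
```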

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Stergios-Konstantinidis/MNLP_M3_tokenizer_tuned_pos_neg_2")
# Run inference
sentences = [
    "Question: who won the 2002 us open men's tennis championship, Answer: 2002 US Open (tennis) Both Lleyton Hewitt and Venus Williams were unsuccessful in their title defences, Hewitt being defeated in the semifinals by Andre Agassi and Venus being defeated in the final by her younger sister Serena. It was the third of four consecutive Grand Slam titles won by Serena, all won by defeating Venus in the final. Pete Sampras, runner-up in 2001, won his fifth US Open title, and his 14th and final Grand Slam title, defeating his rival Agassi in the final in four sets.",
    'Question: who played the artful dodger in the original oliver, Answer: Jack Wild Jack Wild (30 September 1952 – 1 March 2006) was an English actor and singer, best known for his debut role as the Artful Dodger in Oliver!, (1968) for which he received an Academy Award nomination for Best Supporting Actor as well as Golden Globe and BAFTA nominations.',
    'Question: what do the pilgrims have to do with thanksgiving, Answer: Thanksgiving (United States) The event that Americans commonly call the "First Thanksgiving" was celebrated by the Pilgrims after their first harvest in the New World in October 1621.[5] This feast lasted three days, and—as accounted by attendee Edward Winslow[6]—it was attended by 90 Native Americans and 53 Pilgrims.[7] The New England colonists were accustomed to regularly celebrating "thanksgivings"—days of prayer thanking God for blessings such as military victory or the end of a drought.[8]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Training Details

Training Dataset

Unnamed Dataset

  • Size: 66,654 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:

    |         | sentence_0                                           | sentence_1                                           | label                  |
    |---------|------------------------------------------------------|------------------------------------------------------|------------------------|
    | type    | string                                               | string                                               | int                    |
    | details | min: 17 tokens, mean: 185.19 tokens, max: 512 tokens | min: 16 tokens, mean: 184.06 tokens, max: 511 tokens | 0: ~50.00%, 1: ~50.00% |
  • Samples:
    1. label: 1 (positive pair; sentence_0 and sentence_1 are identical)
       sentence_0 = sentence_1: [{'from': 'human', 'value': 'How do Cepheid variables serve as tools for measuring astronomical distances?'}, {'from': 'gpt', 'value': 'Cepheid variables are stars that exhibit a direct relationship between their pulsation period and intrinsic luminosity. This property allows them to function as "standard candles" in astronomy. By observing the pulsation period of a Cepheid variable, we can determine its luminosity. Since the brightness of a star decreases with increasing distance, we can use the known luminosity of Cepheid variables to calculate the distance to these stars and, by extension, other celestial objects. This method is crucial for estimating distances within our galaxy and even in neighboring galaxies.'}]
    2. label: 0 (negative pair)
       sentence_0: [{'from': 'human', 'value': 'If a current of 5 amperes (A) passing through a circuit generates 435 watts (W) of power, what is the resistance of the circuit?'}, {'from': 'gpt', 'value': 'The formula for power (P) in an electrical circuit is:\n\nP = I^2 * R\n\nwhere:\n\nP is power in watts (W)\nI is current in amperes (A)\nR is resistance in ohms (Ω)\n\nWe are given that the current (I) is 5 A and the power (P) is 435 W. We can use the formula to find the resistance (R):\n\nR = P / I^2\nR = 435 W / 5 A^2\nR = 17.4 Ω\n\nTherefore, the resistance of the circuit is 17.4 ohms.\n####\nThe resistance of the circuit is 17.4 ohms (Ω).'}]
       sentence_1: Question: where does the last name anthony come from, Answer: Anthony (given name) Anthony or Antony is a masculine given name, derived from the Antonii, a gens (Roman family name) to which Mark Antony (Marcus Antonius) belonged. There was an ancient tradition that the Antonii gens were Heracleidae, being descendants of Anton, a son of Heracles. (Plut. Ant. 4)
    3. label: 1 (positive pair; sentence_0 and sentence_1 are identical)
       sentence_0 = sentence_1: Question: where did skin in the game come from, Answer: Skin in the game (phrase) The origin of the phrase is unknown.[1]
  • Loss: ContrastiveTensionLoss
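ContrastiveTensionLoss scores a pair by the dot product of the two embeddings and trains it with binary cross-entropy against the 0/1 label (1 for a sentence paired with itself, 0 for two different sentences), which matches the roughly 50/50 label split above. A hedged numpy sketch of the per-pair loss; this is a simplified illustration, not the library's exact implementation (which uses two independent encoder copies):

```python
import numpy as np

def contrastive_tension_loss(emb_a, emb_b, label):
    """Binary cross-entropy on the dot-product score of an embedding pair.

    label = 1: the pair should score high (same sentence),
    label = 0: the pair should score low (different sentences).
    """
    logit = float(np.dot(emb_a, emb_b))
    prob = 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes score into (0, 1)
    eps = 1e-12                          # numerical safety for log
    return -(label * np.log(prob + eps) + (1 - label) * np.log(1 - prob + eps))

# An identical pair (label 1) incurs far less loss than a mismatched one
a = np.array([1.0, 0.0, 2.0])
print(contrastive_tension_loss(a, a, 1) < contrastive_tension_loss(a, -a, 1))
```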

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 14
  • per_device_eval_batch_size: 14
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 14
  • per_device_eval_batch_size: 14
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

| Epoch  | Step | Training Loss |
|--------|------|---------------|
| 0.1050 | 500  | 11.0503       |
| 0.2100 | 1000 | 10.1883       |
| 0.3151 | 1500 | 9.7122        |
| 0.4201 | 2000 | 9.71          |
| 0.5251 | 2500 | 9.7154        |
| 0.6301 | 3000 | 9.7117        |
| 0.7351 | 3500 | 9.706         |
| 0.8402 | 4000 | 9.7122        |
| 0.9452 | 4500 | 9.7079        |

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.52.4
  • PyTorch: 2.6.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveTensionLoss

@inproceedings{carlsson2021semantic,
    title={Semantic Re-tuning with Contrastive Tension},
    author={Fredrik Carlsson and Amaru Cuba Gyllensten and Evangelia Gogoulou and Erik Ylip{\"a}{\"a} Hellqvist and Magnus Sahlgren},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=Ov_sMNau-PF}
}