
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3 on the sts_retrieval_dataset_hn_scored dataset. It maps sentences & paragraphs to a 2048-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 2048 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 0.6B parameters (F32)
  • Training Dataset: sts_retrieval_dataset_hn_scored
  • Language: en

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): CustomPooler(
    (ln_queries): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    (ln_tokens): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    (q_proj): Linear(in_features=1024, out_features=2048, bias=True)
    (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
    (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
    (o_proj): Linear(in_features=2048, out_features=1024, bias=True)
    (attn_drop): Dropout(p=0.05, inplace=False)
    (fusion_proj): Linear(in_features=4096, out_features=1024, bias=False)
    (mlp): SwiGLU(
      (gate_proj): Linear(in_features=2048, out_features=3072, bias=True)
      (up_proj): Linear(in_features=2048, out_features=3072, bias=True)
      (down_proj): Linear(in_features=3072, out_features=2048, bias=True)
      (drop): Dropout(p=0.05, inplace=False)
    )
    (output_ln): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
  )
)
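The head above is a custom attention-based pooler rather than the usual mean pooling. Conceptually, attention pooling scores each token embedding against a query vector, softmaxes the scores, and returns the weighted sum. The sketch below shows only that core idea in pure Python; it is not the actual CustomPooler, which additionally applies the projections, SwiGLU MLP, and layer norms listed in the printout.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(tokens, query):
    """Pool token vectors into one vector: score each token against a
    (learned) query, softmax the scores, return the weighted sum."""
    scores = [sum(q * t for q, t in zip(query, tok)) for tok in tokens]
    weights = softmax(scores)
    dim = len(tokens[0])
    return [sum(weights[i] * tokens[i][d] for i in range(len(tokens)))
            for d in range(dim)]
```

With a query that strongly matches the first token, the pooled output is dominated by that token's vector; mean pooling, by contrast, would weight all tokens equally.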

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bobox/custom-pooler-marginmse-v0")
# Run inference
sentences = [
    'What year was Jamukha elected Gür Khan?',
    'After several battles, Jamukha was finally turned over to Temüjin by his own men in 1206.',
    'Voters went to the polls in Thailand, five years after the military seized power in a coup.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 2048]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9932, 0.9922],
#         [0.9932, 1.0000, 0.9952],
#         [0.9922, 0.9952, 1.0000]])

Training Details

Training Dataset

sts_retrieval_dataset_hn_scored

  • Dataset: sts_retrieval_dataset_hn_scored at 38e5944
  • Size: 350,048 training samples
  • Columns: sentence1, sentence2, sentence3, sentence4, sentence5, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min 6, mean 23.11, max 72 tokens
    • sentence2 (string): min 14, mean 120.21, max 275 tokens
    • sentence3 (string): min 14, mean 108.98, max 285 tokens
    • sentence4 (string): min 9, mean 119.11, max 442 tokens
    • sentence5 (string): min 10, mean 120.5, max 427 tokens
    • label (list): 4 elements
  • Samples:
    sentence1 sentence2 sentence3 sentence4 sentence5 label
    Why is depth perception easier with 2 eyes but worse with 1? Depth perception is given by the brain perceiving the difference between the image of the 2 eyes. If an image is further away, there is less difference between the image the right eye sees and the image the left eye sees. The closer it is the more the image is different. Imagine a twig. Very far away you will see it the same. Very close and the right eye sees the front and right side, the left eye sees front and left side. Therefore the brain knows that is closer. With one eye we use clues. So if we see a full image of a car and a man's chest and head above it, we assume the car is Infront of the man rather than someone with no legs has been thrown in the air Infront of the car. If I show you a small cow you assume it is far away rather than a tiny cow close up. This isn't depth perception, just depth clues It's not really that different. Your field of vision is wider, but your eyes automatically combine the images so other than that it's similar. If I close one eye, the image is about 25% narrower but other than that not a huge difference. Depth perception only works with two eyes, but depth perception is subtle and only makes a big difference in rare cases. You can usually tell how far away things are by the context alone. If depth perception drastically altered an image, movies and photographs would look odd from a two-edged perspective. Because you have two eyes. When they're looking at the same thing, your brain can merge the image. When you're focusing on something different, you're actually seeing two different images. Exactly the same way that we can perceive depth with our vision, with parallax. Notice that when you look at something close to your face, you go cross-eyed. Your brain measures the difference in angle between your eyes. 
If they are pointing straight forward, then your brain knows the object is far away, if your eyes are crossed (like you're looking at your nose) then there is a large difference in angle between your eyes. Now imagine scaling this up. your eyes are only a few inches apart, and can only perceive short distances. To measure long distances like stars, we take picture of it, wait 6 months, then take another picture of it. By this point we have rotated around to the other side of the sun. these two positions are the equivalent of your two eyes. By comparing the position of the star in the pictures, we can find the difference in angle between the two vantage points the same way your brain does. For more read the wikipedia article on parallax. [0.5408486127853394, 0.5914487838745117, 0.4706460237503052, 0.4665601849555969]
    How do snails get their shells without being near the ocean? Shells don't simply come from the sea, they're grown by secreting their material. Snails grow their own shells. There is a sea snail that does this: URL_0 It's because of air bouncing around. Think about how you can hear the wind because it moves things around and even if there's nothing to move, you can hear it whistling past buildings or thumping into walls. Well, even a tiny breeze makes noise but usually it's so quiet we can't hear it or don't notice. That's where the shell comes in. The shape of the shell magnifies the sound of a tiny breeze to the point where it's loud enough to hear. So when you put your ear to a shell, you're not hearing the ocean, you're hearing tiny breezes moving in the shell. If i recall correctly, snails are super sensitive to touch and have relatively many pain/heat/cold receptors in thier skin. [0.498150110244751, 0.3957440257072449, 0.38127124309539795, 0.3972085416316986]
    How is the diameter of Mars 53% the size of Earth's but the surface area in only 38% of Earth's? The surface area is actually only 28% of Earth's. The forumula for the surface area of a sphere is 4 * pi * r². So if r = .53, r² = .28 Mars has a mean density of 3.933 g/cm compared to 5.514 g/cm3 for Earth. The radius of Mars at 2,106 mi is slightly over half of Earth's radius 3,959 mi. Thus the ratios of their volumes is 2106^3 / 3959^3 = 0.15. So since Mars is about 30% less dense than Earth and has 15% of the volume, it has about 11% of the mass. Mars The large canyon, Valles Marineris (Latin for "Mariner Valleys", also known as Agathadaemon in the old canal maps), has a length of 4,000 km (2,500 mi) and a depth of up to 7 km (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 km (277 mi) long and nearly 2 km (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 km (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-tectonic plate arrangement.[132][133] Timekeeping on Mars The average length of a Martian sidereal day is 24 h 37 m 22.663 s (88,642.66300 seconds based on SI units), and the length of its solar day (often called a sol) is 24 h 39 m 35.244147 s (88,775.244147 seconds). The corresponding values for Earth are 23 h 56 m 4.0916 s and 24h 00 m 00.002 s, respectively. This yields a conversion factor of 1.02749125170 days/sol. Thus Mars' solar day is only about 2.7% longer than Earth's. [0.4725932478904724, 0.47929924726486206, 0.42970356345176697, 0.37027764320373535]
  • Loss: MarginMSELoss
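MarginMSELoss distills a teacher's relevance judgments into the student: it penalizes the squared difference between the student's similarity margin (positive minus negative) and the teacher's margin. The sketch below shows the core computation for a single query/positive/negative triplet with dot-product similarity; it is a conceptual illustration, not the library implementation, which batches this over all negatives in the label list.

```python
def dot(a, b):
    """Dot-product similarity between two embedding vectors."""
    return sum(x * y for x, y in zip(a, b))

def margin_mse(query, positive, negative, teacher_pos, teacher_neg):
    """Squared error between the student's margin and the teacher's margin."""
    student_margin = dot(query, positive) - dot(query, negative)
    teacher_margin = teacher_pos - teacher_neg
    return (student_margin - teacher_margin) ** 2
```

Note that the loss constrains only the *difference* between positive and negative scores, not their absolute values, so raw loss magnitudes (as in the training logs below) are not directly comparable across score scales.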

Evaluation Dataset

sts_retrieval_dataset_hn_scored

  • Dataset: sts_retrieval_dataset_hn_scored at 38e5944
  • Size: 7,447 evaluation samples
  • Columns: sentence1, sentence2, sentence3, sentence4, sentence5, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min 6, mean 20.66, max 66 tokens
    • sentence2 (string): min 6, mean 112.32, max 414 tokens
    • sentence3 (string): min 6, mean 108.45, max 406 tokens
    • sentence4 (string): min 6, mean 74.89, max 455 tokens
    • sentence5 (string): min 9, mean 66.3, max 455 tokens
    • label (list): 4 elements
  • Samples:
    sentence1 sentence2 sentence3 sentence4 sentence5 label
    If you're not clinically dead until your heart stops, does that mean all heart donors have come from people who were technically still alive? When determining death, the critical organ is the brain, not the heart, because the brain defines the person (character, memories, ability to do anything at all) and most damage to the brain is irreparable. So when there's massive brain damage, all that defines the person is effectively gone forever. However, when somebody is "brain dead", that person's body can still be kept biologically alive for long periods with mechanical ventilation etc. and is a perfect organ donor. Now, the difficulty of course lies in defining "brain death", but there definitely can be unambiguous cases, e.g. when someone sustains massive physical damage to the brain from a traffic accident or similar. To my understanding of developmental biology, the organ will always be composed of cells from the donor. The cells that would divide in your heart to replace old/dead/damaged cells would come through the mitosis of other cells also within the heart as opposed to the liver or the immune system. > The real question to me is, will re recipient still have genetic material of me inside him when he passes away? Yes, for as long as they live with that organ. The balance will be taken from the estate of the deceased. If the estate does not have enough to cover the balance, the debt is written off as a loss. I have heard anecdotes of debt collectors chasing after descendants for the debt, but unless they have signed as legally obligated to paying it, there is no legal standing for them to repay it. Blood type In transfusions of packed red blood cells, individuals with type O Rh D negative blood are often called universal donors. Those with type AB Rh D positive blood are called universal recipients. 
However, these terms are only generally true with respect to possible reactions of the recipient's anti-A and anti-B antibodies to transfused red blood cells, and also possible sensitization to Rh D antigens. One exception is individuals with hh antigen system (also known as the Bombay phenotype) who can only receive blood safely from other hh donors, because they form antibodies against the H antigen present on all red blood cells.[30][31] [0.41563937067985535, 0.4245889186859131, 0.2968587279319763, 0.3197227418422699]
    How Do We Know That Ammonites Had External Shells? A couple of things; first, one must not forget that the closest phylogenic lineage to the ammonites is the Nautilidae, and they have external shells to this day. Furthermore, with the exception of the complexity of the septae, the use of the shell appears homologous between the 2 lineages. Second, there have been a few instances where there was preservation of soft parts in some ammonite fossils, those specimens showed no significant soft tissue on the outside of the shell. In gastropods, the shell is secreted by a part of the molluscan body known as the mantle. URL_0 They also plot and combine measurements of geological structures in order to better understand the orientations of faults and folds in order to reconstruct the history of rock deformation in the area. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world have been metamorphosed to the point where their origin is undiscernable without laboratory analysis. [0.45681333541870117, 0.26675254106521606, 0.23898926377296448, 0.2671601176261902]
    Does a dead animal taste different than an alive animal? It depends on where the piece you want to eat is located. An hour is enough for livor mortis to take place (movement of blood based on gravity). Based on this, you might get a very bloody piece or a 'dry' one. Other factors come into play so it's a definite yes in terms of taste difference. Note, that the difference might be very subtle in some animals. Dead things can make you sick or kill you. Dead things may be full of harmful bacteria or they may have died from something that could kill you also. Carrion eaters have evolved to be resistant to these sorts of things. It is a specialization. They are overlaid with colourless music played live , and consist of vague , almost deliberately sparse orchestral sounds . Yes. It is natural and Yes it is influenced. Imho. Slap stick Chevy Chase / Chris Farley stuff will always hit you in the basic physical comedy bone. Other things that are context dependant of course depend on you understanding the content. How is a spice girl joke funny if you don't know who they are? [0.4751628339290619, 0.39276906847953796, 0.2803378701210022, 0.2605991065502167]
  • Loss: MarginMSELoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • gradient_accumulation_steps: 4
  • learning_rate: 0.0001
  • weight_decay: 0.04
  • num_train_epochs: 0.15
  • warmup_steps: 0.15
  • fp16: True
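Assuming the Sentence Transformers v3+ training API, the non-default values above roughly correspond to a training-arguments setup like the following (the output directory is a placeholder; warmup_steps is reproduced as logged, though fractional values are unusual for a step count and may have been intended as warmup_ratio):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    weight_decay=0.04,
    num_train_epochs=0.15,
    warmup_steps=0.15,  # as logged
    fp16=True,
)
```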

All Hyperparameters

  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.0001
  • weight_decay: 0.04
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 0.15
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_ratio: None
  • warmup_steps: 0.15
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • enable_jit_checkpoint: False
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • use_cpu: False
  • seed: 42
  • data_seed: None
  • bf16: False
  • fp16: True
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: -1
  • ddp_backend: None
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • auto_find_batch_size: False
  • full_determinism: False
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • use_cache: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step    Training Loss    Validation Loss
0.0731     50        1226.4617                  -
0.1463    100          83.9058                  -
0.1506    103                -            24.1453

Framework Versions

  • Python: 3.12.13
  • Sentence Transformers: 5.3.0
  • Transformers: 5.0.0
  • PyTorch: 2.10.0+cu128
  • Accelerate: 1.13.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MarginMSELoss

@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}