This model's training objective follows [Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation](https://arxiv.org/abs/2010.02666).
This is a sentence-transformers model finetuned from BAAI/bge-m3 on the sts_retrieval_dataset_hn_scored dataset. It maps sentences & paragraphs to a 2048-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): CustomPooler(
    (ln_queries): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    (ln_tokens): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    (q_proj): Linear(in_features=1024, out_features=2048, bias=True)
    (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
    (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
    (o_proj): Linear(in_features=2048, out_features=1024, bias=True)
    (attn_drop): Dropout(p=0.05, inplace=False)
    (fusion_proj): Linear(in_features=4096, out_features=1024, bias=False)
    (mlp): SwiGLU(
      (gate_proj): Linear(in_features=2048, out_features=3072, bias=True)
      (up_proj): Linear(in_features=2048, out_features=3072, bias=True)
      (down_proj): Linear(in_features=3072, out_features=2048, bias=True)
      (drop): Dropout(p=0.05, inplace=False)
    )
    (output_ln): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
  )
)
```
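The exact forward pass of `CustomPooler` is not included in this dump. As a rough, hypothetical sketch of the general idea behind attention-based pooling — a learned query attends over the token embeddings and the weighted sum is projected to the output dimension — the following `AttentionPooler` is illustrative only (dimensions chosen to match the dump; it is not the model's actual code):

```python
import torch
import torch.nn as nn


class AttentionPooler(nn.Module):
    """Illustrative attention pooling: a learned query attends over token
    embeddings; the weighted sum is projected to the output dimension.
    NOT the model's actual CustomPooler -- a sketch of the technique."""

    def __init__(self, hidden_dim: int = 1024, out_dim: int = 2048):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, hidden_dim))
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)
        self.o_proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        k = self.k_proj(token_embeddings)
        v = self.v_proj(token_embeddings)
        scores = (self.query @ k.transpose(1, 2)) / k.size(-1) ** 0.5  # (batch, 1, seq_len)
        scores = scores.masked_fill(attention_mask[:, None, :] == 0, float("-inf"))
        attn = scores.softmax(dim=-1)          # attention weights over tokens
        pooled = (attn @ v).squeeze(1)         # (batch, hidden)
        return self.o_proj(pooled)             # (batch, out_dim)


pooler = AttentionPooler()
out = pooler(torch.randn(2, 16, 1024), torch.ones(2, 16))
print(out.shape)  # torch.Size([2, 2048])
```

Padding tokens are masked out before the softmax so they receive zero attention weight, mirroring how mean pooling excludes padding via the attention mask.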
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bobox/custom-pooler-marginmse-v0")

# Run inference
sentences = [
    'What year was Jamukha elected Gür Khan?',
    'After several battles, Jamukha was finally turned over to Temüjin by his own men in 1206.',
    'Voters went to the polls in Thailand, five years after the military seized power in a coup.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 2048]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9932, 0.9922],
#         [0.9932, 1.0000, 0.9952],
#         [0.9922, 0.9952, 1.0000]])
```
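Sentence Transformers models typically use cosine similarity for `model.similarity` (the exact similarity function is set in the model config, which this card does not show). As a minimal sketch of what a cosine similarity matrix over a batch of embeddings computes:

```python
import torch


def cosine_similarity_matrix(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Normalize rows to unit length; the matrix product then yields all pairwise cosines
    a = torch.nn.functional.normalize(a, p=2, dim=-1)
    b = torch.nn.functional.normalize(b, p=2, dim=-1)
    return a @ b.T


embeddings = torch.randn(3, 2048)  # stand-in for model.encode(...) output
sims = cosine_similarity_matrix(embeddings, embeddings)
print(sims.diagonal())  # all ~1.0: every embedding matches itself exactly
```

The diagonal is always 1.0 and the matrix is symmetric, matching the shape of the tensor printed above.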
Columns: `sentence1`, `sentence2`, `sentence3`, `sentence4`, `sentence5`, and `label`

| | sentence1 | sentence2 | sentence3 | sentence4 | sentence5 | label |
|---|---|---|---|---|---|---|
| type | string | string | string | string | string | list |
| details | | | | | | |
| sentence1 | sentence2 | sentence3 | sentence4 | sentence5 | label |
|---|---|---|---|---|---|
| Why is depth perception easier with 2 eyes but worse with 1? | Depth perception is given by the brain perceiving the difference between the image of the 2 eyes. If an image is further away, there is less difference between the image the right eye sees and the image the left eye sees. The closer it is the more the image is different. Imagine a twig. Very far away you will see it the same. Very close and the right eye sees the front and right side, the left eye sees front and left side. Therefore the brain knows that is closer. With one eye we use clues. So if we see a full image of a car and a man's chest and head above it, we assume the car is Infront of the man rather than someone with no legs has been thrown in the air Infront of the car. If I show you a small cow you assume it is far away rather than a tiny cow close up. This isn't depth perception, just depth clues | It's not really that different. Your field of vision is wider, but your eyes automatically combine the images so other than that it's similar. If I close one eye, the image is about 25% narrower but other than that not a huge difference. Depth perception only works with two eyes, but depth perception is subtle and only makes a big difference in rare cases. You can usually tell how far away things are by the context alone. If depth perception drastically altered an image, movies and photographs would look odd from a two-edged perspective. | Because you have two eyes. When they're looking at the same thing, your brain can merge the image. When you're focusing on something different, you're actually seeing two different images. | Exactly the same way that we can perceive depth with our vision, with parallax. Notice that when you look at something close to your face, you go cross-eyed. Your brain measures the difference in angle between your eyes. If they are pointing straight forward, then your brain knows the object is far away, if your eyes are crossed (like you're looking at your nose) then there is a large difference in angle between your eyes. Now imagine scaling this up. your eyes are only a few inches apart, and can only perceive short distances. To measure long distances like stars, we take picture of it, wait 6 months, then take another picture of it. By this point we have rotated around to the other side of the sun. these two positions are the equivalent of your two eyes. By comparing the position of the star in the pictures, we can find the difference in angle between the two vantage points the same way your brain does. For more read the wikipedia article on parallax. | [0.5408486127853394, 0.5914487838745117, 0.4706460237503052, 0.4665601849555969] |
| How do snails get their shells without being near the ocean? | Shells don't simply come from the sea, they're grown by secreting their material. Snails grow their own shells. | There is a sea snail that does this: URL_0 | It's because of air bouncing around. Think about how you can hear the wind because it moves things around and even if there's nothing to move, you can hear it whistling past buildings or thumping into walls. Well, even a tiny breeze makes noise but usually it's so quiet we can't hear it or don't notice. That's where the shell comes in. The shape of the shell magnifies the sound of a tiny breeze to the point where it's loud enough to hear. So when you put your ear to a shell, you're not hearing the ocean, you're hearing tiny breezes moving in the shell. | If i recall correctly, snails are super sensitive to touch and have relatively many pain/heat/cold receptors in thier skin. | [0.498150110244751, 0.3957440257072449, 0.38127124309539795, 0.3972085416316986] |
| How is the diameter of Mars 53% the size of Earth's but the surface area in only 38% of Earth's? | The surface area is actually only 28% of Earth's. The forumula for the surface area of a sphere is 4 * pi * r². So if r = .53, r² = .28 | Mars has a mean density of 3.933 g/cm compared to 5.514 g/cm3 for Earth. The radius of Mars at 2,106 mi is slightly over half of Earth's radius 3,959 mi. Thus the ratios of their volumes is 2106^3 / 3959^3 = 0.15. So since Mars is about 30% less dense than Earth and has 15% of the volume, it has about 11% of the mass. | Mars The large canyon, Valles Marineris (Latin for "Mariner Valleys", also known as Agathadaemon in the old canal maps), has a length of 4,000 km (2,500 mi) and a depth of up to 7 km (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 km (277 mi) long and nearly 2 km (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 km (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-tectonic plate arrangement.[132][133] | Timekeeping on Mars The average length of a Martian sidereal day is 24 h 37 m 22.663 s (88,642.66300 seconds based on SI units), and the length of its solar day (often called a sol) is 24 h 39 m 35.244147 s (88,775.244147 seconds). The corresponding values for Earth are 23 h 56 m 4.0916 s and 24h 00 m 00.002 s, respectively. This yields a conversion factor of 1.02749125170 days/sol. Thus Mars' solar day is only about 2.7% longer than Earth's. | [0.4725932478904724, 0.47929924726486206, 0.42970356345176697, 0.37027764320373535] |
Loss: `MarginMSELoss`

Columns: `sentence1`, `sentence2`, `sentence3`, `sentence4`, `sentence5`, and `label`

| | sentence1 | sentence2 | sentence3 | sentence4 | sentence5 | label |
|---|---|---|---|---|---|---|
| type | string | string | string | string | string | list |
| details | | | | | | |
| sentence1 | sentence2 | sentence3 | sentence4 | sentence5 | label |
|---|---|---|---|---|---|
| If you're not clinically dead until your heart stops, does that mean all heart donors have come from people who were technically still alive? | When determining death, the critical organ is the brain, not the heart, because the brain defines the person (character, memories, ability to do anything at all) and most damage to the brain is irreparable. So when there's massive brain damage, all that defines the person is effectively gone forever. However, when somebody is "brain dead", that person's body can still be kept biologically alive for long periods with mechanical ventilation etc. and is a perfect organ donor. Now, the difficulty of course lies in defining "brain death", but there definitely can be unambiguous cases, e.g. when someone sustains massive physical damage to the brain from a traffic accident or similar. | To my understanding of developmental biology, the organ will always be composed of cells from the donor. The cells that would divide in your heart to replace old/dead/damaged cells would come through the mitosis of other cells also within the heart as opposed to the liver or the immune system. > The real question to me is, will re recipient still have genetic material of me inside him when he passes away? Yes, for as long as they live with that organ. | The balance will be taken from the estate of the deceased. If the estate does not have enough to cover the balance, the debt is written off as a loss. I have heard anecdotes of debt collectors chasing after descendants for the debt, but unless they have signed as legally obligated to paying it, there is no legal standing for them to repay it. | Blood type In transfusions of packed red blood cells, individuals with type O Rh D negative blood are often called universal donors. Those with type AB Rh D positive blood are called universal recipients. However, these terms are only generally true with respect to possible reactions of the recipient's anti-A and anti-B antibodies to transfused red blood cells, and also possible sensitization to Rh D antigens. One exception is individuals with hh antigen system (also known as the Bombay phenotype) who can only receive blood safely from other hh donors, because they form antibodies against the H antigen present on all red blood cells.[30][31] | [0.41563937067985535, 0.4245889186859131, 0.2968587279319763, 0.3197227418422699] |
| How Do We Know That Ammonites Had External Shells? | A couple of things; first, one must not forget that the closest phylogenic lineage to the ammonites is the Nautilidae, and they have external shells to this day. Furthermore, with the exception of the complexity of the septae, the use of the shell appears homologous between the 2 lineages. Second, there have been a few instances where there was preservation of soft parts in some ammonite fossils, those specimens showed no significant soft tissue on the outside of the shell. | In gastropods, the shell is secreted by a part of the molluscan body known as the mantle. URL_0 | They also plot and combine measurements of geological structures in order to better understand the orientations of faults and folds in order to reconstruct the history of rock deformation in the area. | Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world have been metamorphosed to the point where their origin is undiscernable without laboratory analysis. | [0.45681333541870117, 0.26675254106521606, 0.23898926377296448, 0.2671601176261902] |
| Does a dead animal taste different than an alive animal? | It depends on where the piece you want to eat is located. An hour is enough for livor mortis to take place (movement of blood based on gravity). Based on this, you might get a very bloody piece or a 'dry' one. Other factors come into play so it's a definite yes in terms of taste difference. Note, that the difference might be very subtle in some animals. | Dead things can make you sick or kill you. Dead things may be full of harmful bacteria or they may have died from something that could kill you also. Carrion eaters have evolved to be resistant to these sorts of things. It is a specialization. | They are overlaid with colourless music played live , and consist of vague , almost deliberately sparse orchestral sounds . | Yes. It is natural and Yes it is influenced. Imho. Slap stick Chevy Chase / Chris Farley stuff will always hit you in the basic physical comedy bone. Other things that are context dependant of course depend on you understanding the content. How is a spice girl joke funny if you don't know who they are? | [0.4751628339290619, 0.39276906847953796, 0.2803378701210022, 0.2605991065502167] |
Loss: `MarginMSELoss`

Non-default hyperparameters:
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 4
- `learning_rate`: 0.0001
- `weight_decay`: 0.04
- `num_train_epochs`: 0.15
- `warmup_steps`: 0.15
- `fp16`: True

All hyperparameters:
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.04
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 0.15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: None
- `warmup_ratio`: None
- `warmup_steps`: 0.15
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `enable_jit_checkpoint`: False
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `use_cpu`: False
- `seed`: 42
- `data_seed`: None
- `bf16`: False
- `fp16`: True
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: -1
- `ddp_backend`: None
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `auto_find_batch_size`: False
- `full_determinism`: False
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `use_cache`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

| Epoch | Step | Training Loss | Validation Loss |
|---|---|---|---|
| 0.0731 | 50 | 1226.4617 | - |
| 0.1463 | 100 | 83.9058 | - |
| 0.1506 | 103 | - | 24.1453 |
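The `label` lists in the datasets above hold per-candidate teacher relevance scores, and MarginMSELoss (from the Hofstätter et al. paper cited below) trains the student so that its score margin between two passages matches the teacher's margin. A minimal sketch with made-up scores:

```python
import torch
import torch.nn.functional as F


def margin_mse(student_pos, student_neg, teacher_pos, teacher_neg):
    # MSE between the student's (pos - neg) score margin and the teacher's margin
    return F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)


# Made-up scores for two (query, positive, negative) triples
s_pos = torch.tensor([0.90, 0.70])   # student score for the positive passage
s_neg = torch.tensor([0.60, 0.65])   # student score for a negative passage
t_pos = torch.tensor([0.80, 0.60])   # teacher (e.g. cross-encoder) scores
t_neg = torch.tensor([0.60, 0.50])
loss = margin_mse(s_pos, s_neg, t_pos, t_neg)
# The loss is zero exactly when the student reproduces the teacher's margins
```

Because only margins are compared, the student is free to operate on a different absolute score scale than the teacher, which is what makes this objective suitable for cross-architecture distillation.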
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
```bibtex
@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}
```
Base model: [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)