Part of the **reproducing-cross-encoders** collection: a set of cross-encoders trained from various backbones and losses for equal comparison.
This model is a cross-encoder based on microsoft/deberta-v3-base. It was trained on MS MARCO with the MarginMSE loss as part of a reproducibility paper on training cross-encoders: "Reproducing and Comparing Distillation Techniques for Cross-Encoders"; see the paper for more details.
This model is intended for re-ranking the top results returned by a first-stage retrieval system (e.g., BM25, a bi-encoder, or SPLADE).
Training can be easily reproduced using the associated repository; the exact training configuration used for this model is detailed in config.yaml.
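For reference, MarginMSE (Hofstätter et al., 2020) distills a teacher model by regressing the student's score margin between a positive and a negative passage onto the teacher's margin. A minimal PyTorch sketch of the loss (the tensor names are illustrative, not taken from the training code):

```python
import torch.nn.functional as F

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    """MarginMSE: MSE between the student's and the teacher's score margins.

    All arguments are 1-D tensors of relevance scores for a batch of
    (query, positive passage, negative passage) triples.
    """
    student_margin = student_pos - student_neg  # margin predicted by the student
    teacher_margin = teacher_pos - teacher_neg  # margin given by the teacher
    return F.mse_loss(student_margin, teacher_margin)
```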
Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-DeBERTav3-MarginMSE")
model.eval()

# Tokenize the (query, document) pair: the cross-encoder scores both texts jointly
features = tokenizer("What is experimaestro ?", "Experimaestro is a powerful framework for ML experiments management...", padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    scores = model(**features).logits  # one relevance score per pair

print(scores)
```
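To use the model as a re-ranker, score each (query, candidate) pair in a batch and sort the candidates by score. A minimal sketch reusing the `tokenizer` and `model` from the quick start, with made-up candidate documents:

```python
query = "What is experimaestro ?"
candidates = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "DeBERTa improves BERT with disentangled attention.",
    "MS MARCO is a collection of datasets for deep learning in search.",
]

# Score every (query, candidate) pair in a single batch
features = tokenizer([query] * len(candidates), candidates, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Print the candidates by descending relevance score
for score, doc in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.2f}  {doc}")
```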
We provide evaluations of this cross-encoder re-ranking the top 1000 documents retrieved by naver/splade-v3-distilbert. Scores are reported ×100.
| Dataset | RR@10 | nDCG@10 |
|---|---|---|
| msmarco_dev | 38.39 | 45.04 |
| trec2019 | 92.44 | 70.77 |
| trec2020 | 93.24 | 69.48 |
| fever | 81.01 | 80.55 |
| arguana | 15.49 | 22.96 |
| climate_fever | 25.56 | 19.95 |
| dbpedia | 73.59 | 44.10 |
| fiqa | 48.06 | 39.70 |
| hotpotqa | 85.99 | 69.71 |
| nfcorpus | 49.27 | 29.33 |
| nq | 54.72 | 59.88 |
| quora | 73.21 | 75.37 |
| scidocs | 26.94 | 15.35 |
| scifact | 62.87 | 65.14 |
| touche | 62.05 | 35.65 |
| trec_covid | 95.22 | 76.93 |
| robust04 | 66.26 | 45.22 |
| lotte_writing | 71.44 | 62.85 |
| lotte_recreation | 63.05 | 58.11 |
| lotte_science | 49.83 | 41.58 |
| lotte_technology | 58.85 | 48.99 |
| lotte_lifestyle | 77.11 | 66.26 |
| **Mean In Domain** | 74.69 | 61.76 |
| **BEIR 13** | 58.00 | 48.82 |
| **LoTTE (OOD)** | 64.42 | 53.84 |
Base model: microsoft/deberta-v3-base