---
language: en
license: apache-2.0
library_name: transformers
base_model: bert-base-uncased
model_name: cross-encoder-bert-base-BCE
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---
# cross-encoder-bert-base-BCE
This model is a cross-encoder based on bert-base-uncased. It was trained on MS MARCO with the BCE (binary cross-entropy) loss as part of a reproducibility paper on training cross-encoders, "Reproducing and Comparing Distillation Techniques for Cross-Encoders"; see the paper for more details.
## Model Description
This model is intended for re-ranking the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).
- Training Data: MS MARCO Passage
- Language: English
- Loss: BCE (binary cross-entropy)
Training can be easily reproduced using the associated repository. The exact training configuration used for this model is also detailed in `config.yaml`.
## Usage
Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-bert-base-BCE")

# Tokenize a (query, passage) pair
features = tokenizer(
    "What is experimaestro ?",
    "Experimaestro is a powerful framework for ML experiments management...",
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```
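Since the model was trained with a BCE objective, applying a sigmoid to the raw logit yields a relevance probability. The following is a minimal re-ranking sketch; the query, candidate passages, and score mapping are illustrative, not part of the original pipeline:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-bert-base-BCE")
model.eval()

query = "What is experimaestro ?"
# Candidate passages, e.g. the top results of a first-stage retriever (illustrative)
passages = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "BM25 is a classical lexical ranking function.",
    "SPLADE is a sparse neural retrieval model.",
]

# Score all (query, passage) pairs in a single batch
features = tokenizer(
    [query] * len(passages),
    passages,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)
    scores = torch.sigmoid(logits)  # BCE-trained model: sigmoid gives a probability

# Re-rank passages by decreasing relevance score
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.4f}  {passage}")
```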
## Evaluations
We provide evaluations of this cross-encoder re-ranking the top 1000 documents retrieved by naver/splade-v3-distilbert (all scores are ×100).
| dataset | RR@10 | nDCG@10 |
|---|---|---|
| msmarco_dev | 37.63 | 44.00 |
| trec2019 | 90.00 | 67.38 |
| trec2020 | 91.96 | 68.39 |
| fever | 76.49 | 77.27 |
| arguana | 21.41 | 32.09 |
| climate_fever | 33.26 | 24.32 |
| dbpedia | 71.92 | 41.65 |
| fiqa | 42.57 | 34.34 |
| hotpotqa | 86.45 | 70.63 |
| nfcorpus | 49.72 | 27.88 |
| nq | 51.49 | 56.28 |
| quora | 71.56 | 74.43 |
| scidocs | 24.84 | 13.74 |
| scifact | 63.67 | 66.02 |
| touche | 61.83 | 32.49 |
| trec_covid | 84.43 | 58.66 |
| robust04 | 66.34 | 42.61 |
| lotte_writing | 66.37 | 57.13 |
| lotte_recreation | 57.83 | 52.25 |
| lotte_science | 41.88 | 35.02 |
| lotte_technology | 50.35 | 41.56 |
| lotte_lifestyle | 68.01 | 58.36 |
| Mean In Domain | 73.20 | 59.92 |
| BEIR 13 | 56.90 | 46.91 |
| LoTTE (OOD) | 58.46 | 47.82 |
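For reference, the two metrics reported above can be computed as follows. This is a minimal self-contained sketch of RR@10 (reciprocal rank) and nDCG@10 with a made-up toy ranking and relevance judgments; it is not the evaluation code used for the table:

```python
import math

def rr_at_k(ranked_ids, relevant_ids, k=10):
    """Reciprocal rank of the first relevant document within the top k."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranked_ids, gains, k=10):
    """nDCG@k with graded relevance judgments `gains` (doc_id -> gain)."""
    dcg = sum(gains.get(doc_id, 0.0) / math.log2(rank + 1)
              for rank, doc_id in enumerate(ranked_ids[:k], start=1))
    ideal = sorted(gains.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(rank + 1) for rank, g in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: a ranking produced by the re-ranker and its judged documents
ranking = ["d3", "d1", "d7", "d2"]
qrels = {"d1": 1.0, "d2": 2.0}

print(rr_at_k(ranking, set(qrels)))  # 0.5: first relevant document at rank 2
print(ndcg_at_k(ranking, qrels))
```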