# cross-encoder-RoBERTa-BCE
This model is a cross-encoder based on FacebookAI/roberta-base. It was trained on MS MARCO with a binary cross-entropy (BCE) loss as part of a reproducibility paper on training cross-encoders: "Reproducing and Comparing Distillation Techniques for Cross-Encoders". See the paper for more details.
## Model Description
This model is intended for re-ranking the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).
- Training Data: MS MARCO Passage
- Language: English
- Loss: binary cross-entropy (BCE)
Training can be easily reproduced using the associated repository. The exact training configuration used for this model is also detailed in config.yaml.
## Usage
Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-RoBERTa-BCE")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-RoBERTa-BCE")

# Encode the (query, passage) pair jointly, as cross-encoders require
features = tokenizer("What is experimaestro ?", "Experimaestro is a powerful framework for ML experiments management...", padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits  # higher score = more relevant
print(scores)
```
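
Building on the quick start, the sketch below shows the intended re-ranking use: scoring one query against several candidate passages and sorting them by score. The passages are hypothetical placeholders standing in for the top results of a first-stage retriever, and the snippet assumes the model outputs a single relevance logit per pair.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-RoBERTa-BCE")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-RoBERTa-BCE")
model.eval()

query = "What is experimaestro ?"
# Hypothetical candidates, e.g. the top results of BM25 or SPLADE
passages = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "RoBERTa is a robustly optimized BERT pretraining approach.",
    "MS MARCO is a large-scale passage ranking dataset.",
]

# Encode every (query, passage) pair in one batch
features = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)  # assumes one relevance logit per pair

# The model was trained with a BCE loss, so a sigmoid maps each logit to a
# relevance probability; the raw logits already suffice for ranking.
scores = torch.sigmoid(logits)
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```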
## Evaluations
We provide evaluations of this cross-encoder re-ranking the top 1000 documents retrieved by naver/splade-v3-distilbert. All values are reported ×100.
| dataset | RR@10 | nDCG@10 |
|---|---|---|
| msmarco_dev | 36.25 | 42.75 |
| trec2019 | 93.24 | 65.77 |
| trec2020 | 87.72 | 63.74 |
| fever | 76.61 | 77.40 |
| arguana | 21.10 | 31.40 |
| climate_fever | 30.96 | 22.59 |
| dbpedia | 67.43 | 38.87 |
| fiqa | 45.32 | 37.70 |
| hotpotqa | 85.08 | 69.14 |
| nfcorpus | 45.98 | 26.00 |
| nq | 51.33 | 56.51 |
| quora | 72.60 | 75.70 |
| scidocs | 25.04 | 14.19 |
| scifact | 65.32 | 67.50 |
| touche | 59.10 | 34.03 |
| trec_covid | 72.75 | 55.71 |
| robust04 | 62.09 | 40.66 |
| lotte_writing | 65.39 | 56.09 |
| lotte_recreation | 59.62 | 54.72 |
| lotte_science | 44.01 | 36.15 |
| lotte_technology | 51.81 | 43.00 |
| lotte_lifestyle | 72.69 | 62.32 |
| Mean In Domain | 72.40 | 57.42 |
| BEIR 13 | 55.28 | 46.67 |
| LoTTE (OOD) | 59.27 | 48.82 |
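
For reference, RR@10 is the reciprocal rank of the first relevant document within the top 10 (0 if none appears), and nDCG@10 discounts graded relevance by rank and normalizes by the ideal ordering; both are averaged over queries. Below is a minimal per-query sketch with hypothetical relevance labels, not the exact evaluation code used here; note that nDCG has several gain variants, and this sketch uses the exponential one.

```python
import math

def rr_at_k(rels, k=10):
    """Reciprocal rank of the first relevant document within the top k (0 if none)."""
    for rank, rel in enumerate(rels[:k], start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

def dcg_at_k(rels, k=10):
    """Discounted cumulative gain with exponential gains (one common variant)."""
    return sum((2 ** rel - 1) / math.log2(rank + 1)
               for rank, rel in enumerate(rels[:k], start=1))

def ndcg_at_k(rels, k=10):
    """DCG normalized by the ideal DCG; a full evaluation would compute the
    ideal over all judged documents for the query, not just the retrieved list."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance labels of a re-ranked list, in rank order
rels = [0, 2, 1, 0, 0, 3, 0, 0, 0, 0]
print(f"RR@10   = {rr_at_k(rels):.4f}")  # 0.5: first relevant document at rank 2
print(f"nDCG@10 = {ndcg_at_k(rels):.4f}")
```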