---
language: en
license: apache-2.0
library_name: transformers
base_model: bert-base-uncased
model_name: cross-encoder-bert-base-BCE
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-bert-base-BCE

[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010)
[![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `bert-base-uncased`. It was trained on MS MARCO with the `bce` (binary cross-entropy) loss as part of the reproducibility paper on training cross-encoders: "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**"; see the paper for more details.


### Contents
- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)


## Model Description

This model is intended for **re-ranking** the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** BCE (binary cross-entropy)

Training can be reproduced using the [associated repository](https://github.com/xpmir/cross-encoders).
The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).
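For intuition, a pointwise BCE objective treats each (query, passage) pair as an independent binary classification problem: the model's relevance logit is pushed up for positive pairs and down for negative ones. The snippet below is a minimal sketch of that idea using `torch.nn.BCEWithLogitsLoss`; the actual sampling, batching, and hyperparameters are defined in the repository's configuration, not here, and the scores and labels shown are toy values.

```python
import torch
import torch.nn as nn

# Toy relevance logits for three (query, passage) pairs
# and their binary relevance labels (1.0 = relevant).
logits = torch.tensor([2.3, -1.1, 0.4])
labels = torch.tensor([1.0, 0.0, 0.0])

# BCEWithLogitsLoss applies the sigmoid internally,
# so the model outputs raw logits.
loss_fn = nn.BCEWithLogitsLoss()
loss = loss_fn(logits, labels)
print(loss.item())
```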

## Usage

Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-bert-base-BCE")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-bert-base-BCE")

# Encode the (query, passage) pair as a single joint input.
features = tokenizer(
    "What is experimaestro ?",
    "Experimaestro is a powerful framework for ML experiments management...",
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits  # higher logit = more relevant
    print(scores)
```
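For re-ranking, the typical pattern is to score every candidate passage against the query in one batch and sort the candidates by descending score. The sketch below illustrates that pattern, assuming the model exposes a single relevance logit per pair (consistent with BCE training); the query and passages are illustrative placeholders, e.g. the top hits from a first-stage retriever.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-bert-base-BCE")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-bert-base-BCE")
model.eval()

query = "What is experimaestro ?"
passages = [  # illustrative candidates from a first-stage retriever
    "Experimaestro is a powerful framework for ML experiments management...",
    "BM25 is a classical lexical retrieval function.",
    "Cross-encoders jointly encode a query and a passage.",
]

# Score all (query, passage) pairs in a single batch.
features = tokenizer(
    [query] * len(passages),
    passages,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Re-rank candidates by descending relevance score.
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```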

## Evaluations

We evaluate this cross-encoder by re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`.

| dataset            | RR@10     | nDCG@10   |
|:-------------------|:----------|:----------|
| msmarco_dev        | 37.63     | 44.00     |
| trec2019           | 90.00     | 67.38     |
| trec2020           | 91.96     | 68.39     |
| fever              | 76.49     | 77.27     |
| arguana            | 21.41     | 32.09     |
| climate_fever      | 33.26     | 24.32     |
| dbpedia            | 71.92     | 41.65     |
| fiqa               | 42.57     | 34.34     |
| hotpotqa           | 86.45     | 70.63     |
| nfcorpus           | 49.72     | 27.88     |
| nq                 | 51.49     | 56.28     |
| quora              | 71.56     | 74.43     |
| scidocs            | 24.84     | 13.74     |
| scifact            | 63.67     | 66.02     |
| touche             | 61.83     | 32.49     |
| trec_covid         | 84.43     | 58.66     |
| robust04           | 66.34     | 42.61     |
| lotte_writing      | 66.37     | 57.13     |
| lotte_recreation   | 57.83     | 52.25     |
| lotte_science      | 41.88     | 35.02     |
| lotte_technology   | 50.35     | 41.56     |
| lotte_lifestyle    | 68.01     | 58.36     |
| **Mean In Domain** | **73.20** | **59.92** |
| **BEIR 13**        | **56.90** | **46.91** |
| **LoTTE (OOD)**    | **58.46** | **47.82** |
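
As an aside, aggregate metrics such as RR@10 and nDCG@10 can be computed from qrels and a run with a library like `ir_measures`. The snippet below is only an illustration with toy data, not necessarily the evaluation tooling used in the paper.

```python
# Illustration only: computing RR@10 and nDCG@10 with ir_measures.
import ir_measures
from ir_measures import RR, nDCG

# Toy inputs: qrels map {query_id: {doc_id: relevance}},
# runs map {query_id: {doc_id: score}}.
qrels = {"q1": {"d1": 1, "d2": 0}}
run = {"q1": {"d1": 2.3, "d2": -1.1}}

print(ir_measures.calc_aggregate([RR@10, nDCG@10], qrels, run))
```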