---
language: en
license: apache-2.0
library_name: transformers
base_model: google/electra-base-discriminator
model_name: cross-encoder-ELECTRA-DistillRankNET
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-ELECTRA-DistillRankNET

[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010)
[![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `google/electra-base-discriminator`. It was trained on MS MARCO with the `distillRankNET` loss as part of a reproducibility study on training cross-encoders: "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**". See the paper for more details.


### Contents
- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)


## Model Description

This model is intended for **re-ranking** the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE); see the [Usage](#usage) section for an example.

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** distillRankNET

Training can be easily reproduced using the [associated repository](https://github.com/xpmir/cross-encoders).
The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).

## Usage

Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-ELECTRA-DistillRankNET")

# Encode a (query, passage) pair; the cross-encoder scores them jointly.
features = tokenizer(
    "What is experimaestro?",
    "Experimaestro is a powerful framework for ML experiments management...",
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
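
To re-rank a candidate list from a first-stage retriever, score all (query, passage) pairs in a batch and sort by the resulting logit. Below is a minimal sketch with hypothetical passages, reusing `tokenizer` and `model` from above and assuming the classification head outputs a single relevance logit per pair, as is typical for cross-encoders:

```python
# Hypothetical query and first-stage candidate passages
query = "What is experimaestro?"
passages = [
    "Experimaestro is a powerful framework for ML experiments management.",
    "The Eiffel Tower is an iron lattice tower in Paris.",
    "Cross-encoders jointly encode a query and a passage to score relevance.",
]

# Score every (query, passage) pair in a single batch
features = tokenizer(
    [query] * len(passages),
    passages,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Sort candidates by decreasing relevance score
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:+.3f}  {passage}")
```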

## Evaluations

We provide evaluations of this cross-encoder re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`.

| dataset            | RR@10     | nDCG@10   |
|:-------------------|:----------|:----------|
| msmarco_dev        | 37.50     | 44.08     |
| trec2019           | 100.00    | 77.88     |
| trec2020           | 95.00     | 74.82     |
| fever              | 79.89     | 80.03     |
| arguana            | 15.87     | 24.53     |
| climate_fever      | 22.70     | 17.38     |
| dbpedia            | 77.35     | 47.24     |
| fiqa               | 46.89     | 38.68     |
| hotpotqa           | 86.53     | 67.52     |
| nfcorpus           | 55.78     | 34.33     |
| nq                 | 55.00     | 60.02     |
| quora              | 77.07     | 79.32     |
| scidocs            | 27.87     | 15.98     |
| scifact            | 62.64     | 65.76     |
| touche             | 68.69     | 35.77     |
| trec_covid         | 87.97     | 70.20     |
| robust04           | 70.36     | 49.20     |
| lotte_writing      | 70.07     | 61.35     |
| lotte_recreation   | 62.44     | 56.76     |
| lotte_science      | 47.24     | 40.02     |
| lotte_technology   | 55.93     | 47.04     |
| lotte_lifestyle    | 74.60     | 64.90     |
| **Mean In Domain** | **77.50** | **65.59** |
| **BEIR 13**        | **58.79** | **48.98** |
| **LoTTE (OOD)**    | **63.44** | **53.21** |
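
For reference, the reported values are metric scores multiplied by 100. A minimal sketch of the two metrics, using the linear-gain nDCG formulation and a hypothetical list of graded relevance judgments:

```python
import math

def rr_at_10(rels):
    """Reciprocal rank of the first relevant result among the top 10."""
    for rank, rel in enumerate(rels[:10], start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

def ndcg_at_10(rels, judged_rels):
    """nDCG@10 with linear gain: DCG = sum(rel / log2(rank + 1))."""
    dcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels[:10], start=1))
    ideal = sorted(judged_rels, reverse=True)[:10]
    idcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical graded relevance of a re-ranked list for one query
ranked = [0, 3, 0, 1, 0, 0, 2, 0, 0, 0]
print(100 * rr_at_10(ranked))            # 50.0: first relevant doc at rank 2
print(100 * ndcg_at_10(ranked, ranked))  # assumes all judged docs were retrieved
```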