---
language: en
license: apache-2.0
library_name: transformers
base_model: microsoft/deberta-v3-base
model_name: cross-encoder-DeBERTav3-DistillRankNET
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-DeBERTav3-DistillRankNET

[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010)
[![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `microsoft/deberta-v3-base`. It was trained on MS MARCO with the `distillRankNET` loss as part of the reproducibility paper on training cross-encoders, "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**"; see the paper for details.


### Contents
- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)


## Model Description

This model is intended for **re-ranking** the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** distillRankNET (sketched below)

Training can be easily reproduced using the associated repository.
The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).
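
The `distillRankNET` objective is, as its name suggests, a RankNet-style pairwise loss distilled from a teacher's scores; the authoritative formulation is in the paper and the associated repository. A minimal, hypothetical sketch of the idea in PyTorch (the function name and tensor shapes are illustrative, not the repository's API):

```python
import torch
import torch.nn.functional as F

def ranknet_distill_loss(student_scores: torch.Tensor,
                         teacher_scores: torch.Tensor) -> torch.Tensor:
    """RankNet-style distillation over (batch, n_docs) score tensors.

    For every pair of candidates (i, j) that the teacher ranks i above j,
    the student is pushed to score i above j as well.
    """
    # All pairwise student score differences s_i - s_j: (batch, n_docs, n_docs).
    s_diff = student_scores.unsqueeze(-1) - student_scores.unsqueeze(-2)
    # Mask of the pairs the teacher strictly prefers: t_i > t_j.
    pref = (teacher_scores.unsqueeze(-1) > teacher_scores.unsqueeze(-2)).float()
    # RankNet loss -log sigmoid(s_i - s_j), averaged over the preferred pairs.
    pair_loss = F.binary_cross_entropy_with_logits(s_diff, pref, reduction="none")
    return (pair_loss * pref).sum() / pref.sum().clamp(min=1)
```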

## Usage

Quick Start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-DeBERTav3-DistillRankNET")

# Encode a (query, passage) pair; the model scores their relevance.
features = tokenizer(
    "What is experimaestro?",
    "Experimaestro is a powerful framework for ML experiments management...",
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits  # higher logit = more relevant
    print(scores)
```
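
To re-rank a candidate list, score every (query, passage) pair in a batch and sort by logit. A minimal sketch continuing the snippet above (the passages are illustrative):

```python
query = "What is experimaestro?"
passages = [
    "Experimaestro is a framework for managing machine learning experiments...",
    "DeBERTa improves BERT using disentangled attention.",
    "MS MARCO is a large-scale passage ranking dataset.",
]

# Score all pairs in one batch.
batch = tokenizer([query] * len(passages), passages,
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits.squeeze(-1)

# Print passages from most to least relevant.
for score, passage in sorted(zip(logits.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```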

## Evaluations

We provide evaluations of this cross-encoder when re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`. All scores are reported ×100.

| Dataset            | RR@10     | nDCG@10   |
|:-------------------|:----------|:----------|
| msmarco_dev        | 35.30     | 41.91     |
| trec2019           | 94.65     | 74.18     |
| trec2020           | 93.58     | 70.05     |
| fever              | 82.83     | 81.97     |
| arguana            | 13.59     | 21.15     |
| climate_fever      | 26.82     | 19.27     |
| dbpedia            | 72.24     | 42.56     |
| fiqa               | 42.94     | 35.84     |
| hotpotqa           | 78.51     | 60.35     |
| nfcorpus           | 47.19     | 28.16     |
| nq                 | 52.10     | 57.12     |
| quora              | 71.72     | 74.00     |
| scidocs            | 25.04     | 14.36     |
| scifact            | 63.12     | 65.74     |
| touche             | 68.90     | 34.59     |
| trec_covid         | 89.07     | 76.15     |
| robust04           | 70.29     | 46.92     |
| lotte_writing      | 67.04     | 57.94     |
| lotte_recreation   | 61.21     | 55.99     |
| lotte_science      | 48.10     | 40.20     |
| lotte_technology   | 55.99     | 46.36     |
| lotte_lifestyle    | 74.51     | 64.85     |
| **Mean In Domain** | **74.51** | **62.05** |
| **BEIR 13**        | **56.47** | **47.02** |
| **LoTTE (OOD)**    | **62.86** | **52.04** |