---
language: en
license: apache-2.0
library_name: transformers
base_model: bert-base-uncased
model_name: cross-encoder-bert-base-DistillRankNET
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-bert-base-DistillRankNET

[Paper](http://arxiv.org/abs/2603.03010) · [Collection](https://huggingface.co/collections/xpmir/reproducing-cross-encoders) · [Code](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `bert-base-uncased`. It was trained on MS MARCO with the `distillRankNET` loss as part of a reproducibility study on training cross-encoders: "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**". See the paper for more details.

### Contents

- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)

## Model Description

This model is intended for **re-ranking** the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** distillRankNET (a generic sketch of this style of pairwise distillation objective is given below)

Training can be easily reproduced using the [associated repository](https://github.com/xpmir/cross-encoders).
The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).

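The precise `distillRankNET` objective is specified in the paper and in [config.yaml](./config.yaml). As a rough orientation only, a RankNet-style distillation loss trains the student to reproduce the pairwise document orderings induced by a teacher's scores. The sketch below is not the paper's implementation; the function name and the all-pairs selection are assumptions:

```python
import torch
import torch.nn.functional as F

def ranknet_distill_loss(student_scores: torch.Tensor,
                         teacher_scores: torch.Tensor) -> torch.Tensor:
    """RankNet-style distillation over the document pairs of one query.

    Both tensors have shape (n_docs,): scores for the same candidate
    documents, from the student and the teacher respectively.
    """
    # All pairwise score differences (s_i - s_j) for student and teacher.
    s_diff = student_scores.unsqueeze(1) - student_scores.unsqueeze(0)
    t_diff = teacher_scores.unsqueeze(1) - teacher_scores.unsqueeze(0)
    # Keep only the pairs the teacher orders strictly (doc i above doc j).
    mask = t_diff > 0
    # RankNet loss log(1 + exp(-(s_i - s_j))) pushes the student to
    # reproduce each teacher-preferred ordering.
    return F.softplus(-s_diff[mask]).mean()
```
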
## Usage

Quick start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-bert-base-DistillRankNET")

# Tokenize a (query, passage) pair; the model scores their relevance.
features = tokenizer("What is experimaestro ?", "Experimaestro is a powerful framework for ML experiments management...", padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```

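In a re-ranking setting you typically score every (query, candidate) pair in one batch and sort the candidates by score. A minimal sketch, assuming the model produces a single relevance logit per pair as in the quick start (the candidate passages here are placeholders):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-bert-base-DistillRankNET")
model.eval()

query = "What is experimaestro ?"
candidates = [  # hypothetical passages returned by a first-stage retriever
    "Experimaestro is a powerful framework for ML experiments management...",
    "BERT is a bidirectional transformer pre-trained on large corpora.",
]

# Score all (query, passage) pairs in a single batch.
features = tokenizer([query] * len(candidates), candidates,
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Sort candidates from most to least relevant.
for score, passage in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {passage}")
```
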
## Evaluations

We evaluate this cross-encoder by re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`. All values are multiplied by 100; a minimal sketch of how RR@10 and nDCG@10 are computed follows the table.

| dataset            | RR@10     | nDCG@10   |
|:-------------------|:----------|:----------|
| msmarco_dev        | 36.42     | 42.84     |
| trec2019           | 95.74     | 74.15     |
| trec2020           | 94.25     | 72.10     |
| fever              | 81.04     | 80.99     |
| arguana            | 22.80     | 34.31     |
| climate_fever      | 29.17     | 21.50     |
| dbpedia            | 76.58     | 45.80     |
| fiqa               | 43.41     | 35.34     |
| hotpotqa           | 89.45     | 72.86     |
| nfcorpus           | 56.85     | 34.36     |
| nq                 | 52.57     | 57.27     |
| quora              | 76.95     | 78.94     |
| scidocs            | 28.31     | 15.65     |
| scifact            | 67.81     | 70.21     |
| touche             | 63.22     | 34.36     |
| trec_covid         | 89.83     | 68.52     |
| robust04           | 69.69     | 47.75     |
| lotte_writing      | 64.88     | 55.85     |
| lotte_recreation   | 58.11     | 52.84     |
| lotte_science      | 43.32     | 36.06     |
| lotte_technology   | 49.62     | 41.06     |
| lotte_lifestyle    | 70.00     | 60.53     |
| **Mean In Domain** | **75.47** | **63.03** |
| **BEIR 13**        | **59.85** | **50.01** |
| **LoTTE (OOD)**    | **59.27** | **49.01** |
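
For reference, RR@10 and nDCG@10 can be computed per query from a ranked document list and relevance judgments as below. This is a generic sketch (the evaluation pipeline in the repository may differ); `ranking` and `qrels` are hypothetical structures, and the nDCG uses the standard linear-gain, log2-discount variant:

```python
import math

def rr_at_10(ranking: list[str], qrels: dict[str, int]) -> float:
    """Reciprocal rank of the first relevant document in the top 10."""
    for i, doc_id in enumerate(ranking[:10], start=1):
        if qrels.get(doc_id, 0) > 0:
            return 1.0 / i
    return 0.0

def ndcg_at_10(ranking: list[str], qrels: dict[str, int]) -> float:
    """nDCG@10: DCG of the ranking divided by the DCG of an ideal ranking."""
    dcg = sum(qrels.get(doc_id, 0) / math.log2(i + 1)
              for i, doc_id in enumerate(ranking[:10], start=1))
    ideal = sorted(qrels.values(), reverse=True)[:10]
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: one query, three retrieved documents.
qrels = {"d1": 2, "d3": 1}      # graded relevance judgments
ranking = ["d2", "d1", "d3"]    # documents sorted by model score
print(rr_at_10(ranking, qrels), ndcg_at_10(ranking, qrels))
```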