---
language: en
license: apache-2.0
library_name: transformers
base_model: jhu-clsp/ettin-encoder-32m
model_name: cross-encoder-ettin-32m-DistillRankNET
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-ettin-32m-DistillRankNET

[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010)
[![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `jhu-clsp/ettin-encoder-32m`. It was trained on MS MARCO with the `distillRankNET` loss as part of a reproducibility paper on training cross-encoders: "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**". See the paper for more details.

### Contents
- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)

## Model Description

This model is intended for **re-ranking** the top results returned by a retrieval system (such as BM25, bi-encoders, or SPLADE).

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** distillRankNET

Training can easily be reproduced using the associated repository. The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).

## Usage

Quick start:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-ettin-32m-DistillRankNET")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-ettin-32m-DistillRankNET")
model.eval()

# Encode the (query, passage) pair jointly
features = tokenizer("What is experimaestro?",
                     "Experimaestro is a powerful framework for ML experiments management...",
                     padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # The logit is the relevance score of the passage for the query
    scores = model(**features).logits
print(scores)
```

## Evaluations

We report evaluations of this cross-encoder re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`.

| dataset | RR@10 | nDCG@10 |
|:-------------------|:----------|:----------|
| msmarco_dev | 29.69 | 35.29 |
| trec2019 | 91.86 | 62.04 |
| trec2020 | 85.57 | 63.47 |
| fever | 70.41 | 71.33 |
| arguana | 8.61 | 13.20 |
| climate_fever | 16.04 | 11.98 |
| dbpedia | 61.21 | 34.43 |
| fiqa | 32.94 | 25.37 |
| hotpotqa | 74.34 | 57.33 |
| nfcorpus | 40.43 | 23.10 |
| nq | 38.18 | 42.81 |
| quora | 72.61 | 73.97 |
| scidocs | 21.50 | 11.66 |
| scifact | 51.45 | 54.28 |
| touche | 64.88 | 31.23 |
| trec_covid | 88.83 | 64.72 |
| robust04 | 52.38 | 31.19 |
| lotte_writing | 59.75 | 50.70 |
| lotte_recreation | 48.66 | 43.92 |
| lotte_science | 38.10 | 32.33 |
| lotte_technology | 42.30 | 34.81 |
| lotte_lifestyle | 59.83 | 50.72 |
| **Mean In Domain** | **69.04** | **53.60** |
| **BEIR 13** | **49.34** | **39.65** |
| **LoTTE (OOD)** | **50.17** | **40.61** |
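
As a complement to the [Usage](#usage) section, the sketch below shows how the model would typically be applied in a re-ranking pipeline: scoring several candidate passages for one query in a single batch and sorting them by score. It assumes the model outputs a single relevance logit per pair, as printed in the Quick Start; the query and passages are hypothetical, and in a real pipeline the candidates would come from a first-stage retriever such as BM25 or SPLADE.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-ettin-32m-DistillRankNET")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-ettin-32m-DistillRankNET")
model.eval()

query = "What is experimaestro?"
# Hypothetical candidates, e.g. the top passages from a first-stage retriever
passages = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "A maestro conducts an orchestra.",
    "Cross-encoders score a query and a passage jointly.",
]

# Tokenize all (query, passage) pairs as one batch
features = tokenizer([query] * len(passages), passages,
                     padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # Assumes a single relevance logit per pair, as in the Quick Start
    scores = model(**features).logits.squeeze(-1)

# Sort candidates from most to least relevant
order = scores.argsort(descending=True)
for rank, idx in enumerate(order.tolist(), start=1):
    print(f"{rank}. {scores[idx].item():.3f}  {passages[idx]}")
```

Scoring everything in one batch keeps the sketch short; for lists as long as the `1000` candidates used in the evaluations above, the pairs would normally be scored in smaller batches to bound memory use.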