---
language: en
license: apache-2.0
library_name: transformers
base_model: microsoft/MiniLM-L12-H384-uncased
model_name: cross-encoder-MiniLM-L12-Hinge
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-MiniLM-L12-Hinge

[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010)
[![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `microsoft/MiniLM-L12-H384-uncased`. It was trained on MS MARCO with the `hingeLoss` loss as part of a reproducibility paper on training cross-encoders: "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**"; see the paper for more details.

### Contents

- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)

## Model Description

This model is intended for **re-ranking** the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** `hingeLoss`

Training can be easily reproduced using the associated repository. The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).

## Usage

Quick start:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-MiniLM-L12-Hinge")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-MiniLM-L12-Hinge")

features = tokenizer(
    "What is experimaestro?",
    "Experimaestro is a powerful framework for ML experiments management...",
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits

print(scores)
```

## Evaluations

We provide evaluations of this cross-encoder re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`.

| Dataset            | RR@10     | nDCG@10   |
|:-------------------|:----------|:----------|
| msmarco_dev        | 38.68     | 45.16     |
| trec2019           | 97.67     | 73.42     |
| trec2020           | 95.06     | 73.72     |
| fever              | 78.87     | 79.00     |
| arguana            | 22.46     | 33.27     |
| climate_fever      | 26.81     | 20.05     |
| dbpedia            | 74.03     | 43.09     |
| fiqa               | 44.61     | 36.41     |
| hotpotqa           | 85.90     | 68.09     |
| nfcorpus           | 56.50     | 33.72     |
| nq                 | 51.79     | 56.76     |
| quora              | 68.98     | 72.31     |
| scidocs            | 27.61     | 15.34     |
| scifact            | 67.59     | 70.06     |
| touche             | 65.41     | 33.09     |
| trec_covid         | 89.35     | 69.05     |
| robust04           | 71.52     | 49.26     |
| lotte_writing      | 66.06     | 57.90     |
| lotte_recreation   | 61.23     | 55.32     |
| lotte_science      | 45.44     | 37.67     |
| lotte_technology   | 52.88     | 44.83     |
| lotte_lifestyle    | 71.60     | 62.20     |
| **Mean In Domain** | **77.14** | **64.10** |
| **BEIR 13**        | **58.45** | **48.48** |
| **LoTTE (OOD)**    | **61.46** | **51.20** |
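In practice, the re-ranking setup used in these evaluations amounts to pairing the query with each candidate passage, scoring all pairs in a batch, and sorting candidates by score. The sketch below illustrates this with placeholder texts (the query and passages are illustrative, not from the evaluation data), assuming the single-logit relevance head used in the quick start above:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("xpmir/cross-encoder-MiniLM-L12-Hinge")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-MiniLM-L12-Hinge")
model.eval()

# Candidates as returned by a first-stage retriever (illustrative placeholders).
query = "What is experimaestro?"
passages = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "BM25 is a classical lexical retrieval function.",
    "SPLADE is a sparse neural retrieval model.",
]

# Score every (query, passage) pair in one batch.
features = tokenizer(
    [query] * len(passages),
    passages,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Higher score means more relevant; sort candidates accordingly.
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.4f}\t{passage}")
```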
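If you already use the `sentence-transformers` library, its `CrossEncoder` wrapper handles the pairing and batching shown above. Note that loading this particular checkpoint through that wrapper is an assumption on our part (it should hold for any standard sequence-classification checkpoint), not something stated elsewhere in this card:

```python
# Assumption: this checkpoint loads as a standard sequence-classification
# model, which is what sentence-transformers' CrossEncoder expects.
from sentence_transformers import CrossEncoder

model = CrossEncoder("xpmir/cross-encoder-MiniLM-L12-Hinge")
scores = model.predict([
    ("What is experimaestro?", "Experimaestro is a powerful framework for ML experiments management..."),
    ("What is experimaestro?", "BM25 is a classical lexical retrieval function."),
])
print(scores)
```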