---
base_model:
  - Qwen/Qwen3-4B
language:
  - en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: text-ranking
tags:
  - finance
  - legal
  - code
  - stem
  - medical
---

# zeroentropy/zerank-1-small

This model, `zeroentropy/zerank-1-small`, is a state-of-the-art open-weight reranker. It was introduced in the paper *zELO: ELO-inspired Training Method for Rerankers and Embedding Models*.

## Abstract

We introduce a novel training methodology named zELO, which optimizes retrieval performance via the analysis that ranking tasks are statistically equivalent to a Thurstone model. Based on the zELO method, we use unsupervised data in order to train a suite of state-of-the-art open-weight reranker models: zerank-1 and zerank-1-small. These models achieve the highest retrieval scores in multiple domains, including finance, legal, code, and STEM, outperforming closed-source proprietary rerankers on both NDCG@10 and Recall. They also demonstrate great versatility, maintaining their zero-shot performance on out-of-domain and private customer datasets. Training used 112,000 queries with 100 documents per query, and was performed end-to-end from unannotated queries and documents in under 10,000 H100-hours.
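To illustrate the ELO-inspired framing (a minimal sketch only, not the paper's actual training code — the function names and constants here are illustrative assumptions): pairwise "document A beats document B for this query" outcomes can be converted into per-document scalar ratings with standard Elo updates, and the ratings then induce a ranking.

```python
def elo_update(r_a, r_b, outcome, k=32.0):
    """Standard Elo update. outcome: 1.0 if A beats B, 0.0 if B beats A."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (outcome - expected_a)
    return r_a + delta, r_b - delta

def rate_documents(pairwise_wins, docs, rounds=20):
    """Derive per-document ratings from (winner, loser) comparison pairs."""
    ratings = {d: 1000.0 for d in docs}
    for _ in range(rounds):
        for winner, loser in pairwise_wins:
            ratings[winner], ratings[loser] = elo_update(
                ratings[winner], ratings[loser], 1.0
            )
    return ratings

docs = ["d1", "d2", "d3"]
# d1 beats d2 and d3; d2 beats d3.
wins = [("d1", "d2"), ("d1", "d3"), ("d2", "d3")]
ratings = rate_documents(wins, docs)
ranked = sorted(docs, key=ratings.get, reverse=True)
print(ranked)  # d1 rated above d2, d2 above d3
```

The actual method operates at a much larger scale and derives the pairwise signal without human annotation; this sketch only shows how pairwise outcomes map to a scalar ranking.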

## Code

The methodology and benchmarking framework associated with this model can be found in the zbench GitHub repository.

In search engines, rerankers are crucial for improving the accuracy of a retrieval system: an initial retriever fetches candidate documents, and the reranker re-scores them against the query.

This 1.7B reranker is the smaller version of our flagship model zeroentropy/zerank-1. Though the model is over 2x smaller, it maintains nearly the same standard of performance, continuing to outperform other popular rerankers, and displaying massive accuracy gains over traditional vector search.

We release this model under the open-source Apache 2.0 license, in order to support the open-source community and push the frontier of what's possible with open-source models.

## How to Use

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)

query_documents = [
    ("What is 2+2?", "4"),
    ("What is 2+2?", "The answer is definitely 1 million"),
]
scores = model.predict(query_documents)
print(scores)
```

The model can also be run via ZeroEntropy's /models/rerank endpoint.

## Evaluations

NDCG@10 scores between zerank-1-small and competing closed-source proprietary rerankers. Since we are evaluating rerankers, OpenAI's text-embedding-3-small is used as an initial retriever for the Top 100 candidate documents.

| Task | Embedding | cohere-rerank-v3.5 | Salesforce/Llama-rank-v1 | zerank-1-small | zerank-1 |
|---|---|---|---|---|---|
| Code | 0.678 | 0.724 | 0.694 | 0.730 | 0.754 |
| Conversational | 0.250 | 0.571 | 0.484 | 0.556 | 0.596 |
| Finance | 0.839 | 0.824 | 0.828 | 0.861 | 0.894 |
| Legal | 0.703 | 0.804 | 0.767 | 0.817 | 0.821 |
| Medical | 0.619 | 0.750 | 0.719 | 0.773 | 0.796 |
| STEM | 0.401 | 0.510 | 0.595 | 0.680 | 0.694 |
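For reference, NDCG@10 (the metric in the table above) rewards placing relevant documents near the top of the ranked list, discounting gains logarithmically by rank. A minimal sketch of the metric (not the benchmark's evaluation code):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one ranked list of graded relevance labels."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# The only relevant document (label 1) was placed at rank 2 of 4:
print(round(ndcg_at_k([0, 1, 0, 0]), 3))  # 0.631
```

The reported scores average this quantity over all queries in each task, with the reranker ordering the top-100 candidates from the initial retriever.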

Comparing BM25 and Hybrid Search without and with zerank-1-small:
