Add comprehensive model card for E2Rank

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +256 -0
README.md ADDED

---
license: apache-2.0
library_name: transformers
pipeline_tag: feature-extraction
---

# E2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker

<div align="center">
<a href="https://alibaba-nlp.github.io/E2Rank/">🤖 Website</a> |
<a href="https://huggingface.co/papers/2510.22733">📄 Hugging Face Paper</a> |
<a href="https://huggingface.co/collections/Alibaba-NLP/e2rank">🤗 Hugging Face Collection</a> |
<a href="https://github.com/Alibaba-NLP/E2Rank">🔗 GitHub Repository</a>
</div>

# 📌 Introduction

We introduce $\text{E}^2\text{Rank}$, meaning **E**fficient **E**mbedding-based **Rank**ing (also meaning **Embedding-to-Rank**), which extends a single text embedding model to perform both high-quality retrieval and listwise reranking, thereby achieving strong effectiveness with remarkable efficiency.

By applying cosine similarity between the query and document embeddings as a unified ranking function, the listwise ranking prompt, which is constructed from the original query and its candidate documents, serves as an enhanced query enriched with signals from the top-K documents, akin to pseudo-relevance feedback (PRF) in traditional retrieval models. This design preserves the efficiency and representational quality of the base embedding model while significantly improving its reranking performance.
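
Schematically, writing $E(\cdot)$ for the shared embedding model and $\mathrm{prompt}(q, d_1, \dots, d_K)$ for the listwise ranking prompt built from the query and its top-$K$ candidates (notation ours, summarizing the description above), both stages score documents with the same cosine function:

$$
s_{\mathrm{retrieve}}(q, d) = \cos\!\big(E(q),\, E(d)\big), \qquad
s_{\mathrm{rerank}}(q, d) = \cos\!\big(E(\mathrm{prompt}(q, d_1, \dots, d_K)),\, E(d)\big)
$$

Because $E(d)$ is identical in both stages, document embeddings computed at retrieval time can be reused for reranking.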

Empirically, E2Rank achieves state-of-the-art results on the BEIR reranking benchmark and demonstrates competitive performance on the reasoning-intensive BRIGHT benchmark, with very low reranking latency. We also show that the ranking training process improves embedding performance on the MTEB benchmark. Our findings indicate that a single embedding model can effectively unify retrieval and reranking, offering both computational efficiency and competitive ranking accuracy.

**Our work highlights the potential of single embedding models to serve as unified retrieval-reranking engines, offering a practical, efficient, and accurate alternative to complex multi-stage ranking systems.**

<div align="center">
<img src="https://github.com/Alibaba-NLP/E2Rank/raw/main/assets/cover.png" width="90%" height="auto" />
<p style="width: 70%; margin-left: auto; margin-right: auto">
<b>(a)</b> Overview of E2Rank. <b>(b)</b> Average reranking performance on the BEIR benchmark, where E2Rank outperforms other baselines. <b>(c)</b> Reranking latency per query on the Covid dataset, where E2Rank runs several times faster than RankQwen3.
</p>
</div>

# 🚀 Quick Start

## Model List

| Supported Task | Model Name | Size | Layers | Sequence Length | Embedding Dimension | Instruction Aware |
|-----------------------------|----------------------|------|--------|-----------------|---------------------|-------------------|
| **Embedding + Reranking** | [Alibaba-NLP/E2Rank-0.6B](https://huggingface.co/Alibaba-NLP/E2Rank-0.6B) | 0.6B | 28 | 32K | 1024 | Yes |
| **Embedding + Reranking** | [Alibaba-NLP/E2Rank-4B](https://huggingface.co/Alibaba-NLP/E2Rank-4B) | 4B | 36 | 32K | 2560 | Yes |
| **Embedding + Reranking** | [Alibaba-NLP/E2Rank-8B](https://huggingface.co/Alibaba-NLP/E2Rank-8B) | 8B | 36 | 32K | 4096 | Yes |
| Embedding Only | [Alibaba-NLP/E2Rank-0.6B-Embedding-Only](https://huggingface.co/Alibaba-NLP/E2Rank-0.6B-Embedding-Only) | 0.6B | 28 | 32K | 1024 | Yes |
| Embedding Only | [Alibaba-NLP/E2Rank-4B-Embedding-Only](https://huggingface.co/Alibaba-NLP/E2Rank-4B-Embedding-Only) | 4B | 36 | 32K | 2560 | Yes |
| Embedding Only | [Alibaba-NLP/E2Rank-8B-Embedding-Only](https://huggingface.co/Alibaba-NLP/E2Rank-8B-Embedding-Only) | 8B | 36 | 32K | 4096 | Yes |

> **Note**:
> - `Embedding Only` indicates that the model is trained only with the contrastive learning objective and supports embedding tasks only, while `Embedding + Reranking` indicates the **full E2Rank model** trained with both embedding and reranking objectives (for more details, please refer to the [paper](https://arxiv.org/abs/2510.22733)).
> - `Instruction Aware` indicates whether the model supports customizing the input instruction according to different tasks.

## Usage

### Embedding Model

Using E2Rank as an embedding model is similar to using [Qwen3-Embedding](https://github.com/QwenLM/Qwen3-Embedding). The only difference is that Qwen3-Embedding automatically appends an EOS token, while E2Rank requires users to manually append the special token `<|endoftext|>` to the end of each input text.

<details>
<summary><b>Transformers Usage</b></summary>

```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # With left padding, the last position always holds the final token;
    # otherwise index each sequence at its own last non-padding position.
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
# E2Rank requires the EOS token to be appended manually
input_texts = [t + "<|endoftext|>" for t in input_texts]

tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/E2Rank-0.6B', padding_side='left')
model = AutoModel.from_pretrained('Alibaba-NLP/E2Rank-0.6B')

max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
batch_dict.to(model.device)
with torch.no_grad():
    outputs = model(**batch_dict)
    embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)

print(scores.tolist())
# [[0.5950675010681152, 0.030417663976550102], [0.061970409005880356, 0.562691330909729]]
```
</details>
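
Given these normalized embeddings, retrieval over a corpus is just a top-$k$ lookup by cosine score. A minimal follow-on sketch (our addition, reusing `scores` from the snippet above, where rows are queries and columns are documents):

```python
# Rank documents for each query by descending cosine similarity.
topk = torch.topk(scores, k=min(2, scores.shape[1]), dim=1)
for qi, (vals, idxs) in enumerate(zip(topk.values.tolist(), topk.indices.tolist())):
    print(f"query {qi}: top documents {idxs} with scores {vals}")
```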

### Reranking

To use E2Rank as a reranker, you only need to additionally process the query by adding (part of) the documents to be reranked into the *listwise prompt*; everything else is the same as using the embedding model.

<details>
<summary><b>Transformers Usage</b></summary>

```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/E2Rank-0.6B', padding_side='left')
model = AutoModel.from_pretrained('Alibaba-NLP/E2Rank-0.6B')


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_listwise_prompt(task_description: str, query: str, documents: list[str], num_input_docs: int = 20) -> str:
    # Build the listwise ranking prompt: the instruction, the numbered candidate
    # documents, and the original query, rendered through the chat template.
    input_docs = documents[:num_input_docs]
    input_docs = "\n".join([f"[{i}] {doc}" for i, doc in enumerate(input_docs, start=1)])
    messages = [{
        "role": "user",
        "content": f'{task_description}\nDocuments:\n{input_docs}Search Query:{query}'
    }]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=False,
    )
    return text


task = 'Given a web search query and some relevant documents, rerank the documents that answer the query:'

queries = [
    'What is the capital of China?',
    'Explain gravity'
]

# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
documents = [doc + "<|endoftext|>" for doc in documents]

pseudo_queries = [
    get_listwise_prompt(task, queries[0], documents),
    get_listwise_prompt(task, queries[1], documents)
]  # no need to add the EOS token here

input_texts = pseudo_queries + documents


max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
batch_dict.to(model.device)
with torch.no_grad():
    outputs = model(**batch_dict)
    embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)

print(scores.tolist())
# [[0.8513513207435608, 0.24268491566181183], [0.33154672384262085, 0.7923378944396973]]
```
</details>
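
The scores above are raw similarities; to turn them into the final reranked order, sort each query's candidates by score. A small follow-on snippet (our addition, reusing `queries` and `scores` from above):

```python
# Reorder candidates by descending score to obtain each query's reranked list.
for qi, query in enumerate(queries):
    order = torch.argsort(scores[qi], descending=True).tolist()
    print(f"{query!r}: reranked document order {order}")
```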

### End-to-end search

Since E2Rank extends a single text embedding model to perform both high-quality retrieval and listwise reranking, you can use it directly to build an end-to-end search system. By reusing the document embeddings computed during the retrieval stage, E2Rank only needs to compute the pseudo query's embedding at reranking time, so it can rerank the retrieved documents with minimal additional computational overhead.

Example code is coming soon.

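Until then, the sketch below illustrates the idea; it is our code, not the official example, and it reuses `last_token_pool`, `get_listwise_prompt`, `tokenizer`, and `model` from the reranking snippet above (the `embed` and `search` helpers are hypothetical names):

```python
import torch
import torch.nn.functional as F


def embed(texts: list[str]) -> torch.Tensor:
    # One shared encoding path for documents, instructed queries, and listwise pseudo queries.
    batch = tokenizer(texts, padding=True, truncation=True, max_length=8192, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**batch)
    return F.normalize(last_token_pool(out.last_hidden_state, batch["attention_mask"]), p=2, dim=1)


def search(query: str, corpus: list[str], retrieve_task: str, rerank_task: str, top_k: int = 20) -> list[int]:
    docs = [d + "<|endoftext|>" for d in corpus]
    doc_emb = embed(docs)  # computed once at retrieval time, reused for reranking
    # Stage 1: retrieval with an instructed query.
    q_emb = embed([f"Instruct: {retrieve_task}\nQuery:{query}<|endoftext|>"])
    cand = torch.topk((q_emb @ doc_emb.T)[0], k=min(top_k, len(docs))).indices.tolist()
    # Stage 2: listwise reranking; only the pseudo query needs a new forward pass.
    pseudo_query = get_listwise_prompt(rerank_task, query, [docs[i] for i in cand])
    rerank_scores = (embed([pseudo_query]) @ doc_emb[cand].T)[0]
    order = torch.argsort(rerank_scores, descending=True).tolist()
    return [cand[i] for i in order]  # corpus indices, best first
```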

# 🚩 Citation

If you find this work helpful, please kindly cite it as:

```bibtex
@misc{liu2025e2rank,
      title={E2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker},
      author={Qi Liu and Yanzhao Zhang and Mingxin Li and Dingkun Long and Pengjun Xie and Jiaxin Mao},
      year={2025},
      eprint={2510.22733},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.22733},
}
```