bhavnicksm committed
Commit 67d0b9c · verified · 1 Parent(s): 4651bb0

Update model card README with v0.0.4 API
Files changed (1): README.md +29 -267
README.md CHANGED
@@ -1,296 +1,58 @@
  ---
- license: apache-2.0
- base_model:
- - Qwen/Qwen3-4B-Base
  tags:
  - tokie
- - transformers
- - sentence-transformers
- - sentence-similarity
- - feature-extraction
- - text-embeddings-inference
  ---

  <p align="center">
- <img src="tokie-banner.png" alt="tokie banner">
  </p>

  # Qwen3-Embedding-4B

- <p align="center">
- <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
- <p>
-
- ## Highlights
-
- The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
-
- **Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks **No.1** in the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios.
-
- **Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
-
- **Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
-
- ## Model Overview
-
- **Qwen3-Embedding-4B** has the following features:
-
- - Model Type: Text Embedding
- - Supported Languages: 100+ Languages
- - Number of Parameters: 4B
- - Context Length: 32k
- - Embedding Dimension: Up to 2560, supports user-defined output dimensions ranging from 32 to 2560
-
- For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
-
- ## Qwen3 Embedding Series Model list
-
- | Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
- |------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
- | Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
- | Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
- | Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
- | Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
- | Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
- | Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
-
- > **Note**:
- > - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
- > - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
- > - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
-
- ## Usage
-
- With Transformers versions earlier than 4.51.0, you may encounter the following error:
- ```
- KeyError: 'qwen3'
- ```
-
- ### Sentence Transformers Usage
-
- ```python
- # Requires transformers>=4.51.0
- # Requires sentence-transformers>=2.7.0
-
- from sentence_transformers import SentenceTransformer
-
- # Load the model
- model = SentenceTransformer("Qwen/Qwen3-Embedding-4B")
-
- # We recommend enabling flash_attention_2 for better acceleration and memory saving,
- # together with setting `padding_side` to "left":
- # model = SentenceTransformer(
- #     "Qwen/Qwen3-Embedding-4B",
- #     model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
- #     tokenizer_kwargs={"padding_side": "left"},
- # )
-
- # The queries and documents to embed
- queries = [
-     "What is the capital of China?",
-     "Explain gravity",
- ]
- documents = [
-     "The capital of China is Beijing.",
-     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
- ]
-
- # Encode the queries and documents. Note that queries benefit from using a prompt
- # Here we use the prompt called "query" stored under `model.prompts`, but you can
- # also pass your own prompt via the `prompt` argument
- query_embeddings = model.encode(queries, prompt_name="query")
- document_embeddings = model.encode(documents)
-
- # Compute the (cosine) similarity between the query and document embeddings
- similarity = model.similarity(query_embeddings, document_embeddings)
- print(similarity)
- # tensor([[0.7534, 0.1147],
- #         [0.0320, 0.6258]])
- ```
-
- ### Transformers Usage
-
- ```python
- # Requires transformers>=4.51.0
- import torch
- import torch.nn.functional as F

- from torch import Tensor
- from transformers import AutoTokenizer, AutoModel

-
- def last_token_pool(last_hidden_states: Tensor,
-                     attention_mask: Tensor) -> Tensor:
-     left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
-     if left_padding:
-         return last_hidden_states[:, -1]
-     else:
-         sequence_lengths = attention_mask.sum(dim=1) - 1
-         batch_size = last_hidden_states.shape[0]
-         return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
-
-
- def get_detailed_instruct(task_description: str, query: str) -> str:
-     return f'Instruct: {task_description}\nQuery:{query}'
-
- # Each query must come with a one-sentence instruction that describes the task
- task = 'Given a web search query, retrieve relevant passages that answer the query'
-
- queries = [
-     get_detailed_instruct(task, 'What is the capital of China?'),
-     get_detailed_instruct(task, 'Explain gravity')
- ]
- # No need to add instruction for retrieval documents
- documents = [
-     "The capital of China is Beijing.",
-     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
- ]
- input_texts = queries + documents
-
- tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
- model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
-
- # We recommend enabling flash_attention_2 for better acceleration and memory saving.
- # model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
-
- max_length = 8192
-
- # Tokenize the input texts
- batch_dict = tokenizer(
-     input_texts,
-     padding=True,
-     truncation=True,
-     max_length=max_length,
-     return_tensors="pt",
- )
- batch_dict.to(model.device)
- outputs = model(**batch_dict)
- embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
-
- # normalize embeddings
- embeddings = F.normalize(embeddings, p=2, dim=1)
- scores = (embeddings[:2] @ embeddings[2:].T)
- print(scores.tolist())
- # [[0.7534257769584656, 0.1146894246339798], [0.03198453038930893, 0.6258305311203003]]
  ```

- ### vLLM Usage
-
  ```python
- # Requires vllm>=0.8.5
- import torch
- import vllm
- from vllm import LLM
-
- def get_detailed_instruct(task_description: str, query: str) -> str:
-     return f'Instruct: {task_description}\nQuery:{query}'
-
- # Each query must come with a one-sentence instruction that describes the task
- task = 'Given a web search query, retrieve relevant passages that answer the query'
-
- queries = [
-     get_detailed_instruct(task, 'What is the capital of China?'),
-     get_detailed_instruct(task, 'Explain gravity')
- ]
- # No need to add instruction for retrieval documents
- documents = [
-     "The capital of China is Beijing.",
-     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
- ]
- input_texts = queries + documents

- model = LLM(model="Qwen/Qwen3-Embedding-4B", task="embed")
-
- outputs = model.embed(input_texts)
- embeddings = torch.tensor([o.outputs.embedding for o in outputs])
- scores = (embeddings[:2] @ embeddings[2:].T)
- print(scores.tolist())
- # [[0.7525103688240051, 0.1143278032541275], [0.030893627554178238, 0.6239761114120483]]
  ```

- 📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
-
- ### Text Embeddings Inference (TEI) Usage

- You can either run / deploy TEI on NVIDIA GPUs as:
-
- ```bash
- docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7.2 --model-id Qwen/Qwen3-Embedding-4B --dtype float16
  ```

- Or on CPU devices as:

- ```bash
- docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2 --model-id Qwen/Qwen3-Embedding-4B --dtype float16
  ```

- And then, generate the embeddings by sending an HTTP POST request:
-
- ```bash
- curl http://localhost:8080/embed \
-     -X POST \
-     -d '{"inputs": ["Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: What is the capital of China?", "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: Explain gravity"]}' \
-     -H "Content-Type: application/json"
- ```

- ## Evaluation

- ### MTEB (Multilingual)

- | Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS |
- |----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:|
- | NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10 |
- | GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33 |
- | BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12 |
- | multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81 |
- | gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61 |
- | gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98 |
- | text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68 |
- | Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80 |
- | gemini-embedding-exp-03-07 | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40 |
- | **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17 |
- | **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86 |
- | **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** |

- > **Note**: For compared models, the scores are retrieved from MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) on May 24th, 2025.

- ### MTEB (Eng v2)

- | MTEB English / Models | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. |
- |--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:|
- | multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
- | NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
- | GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
- | gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
- | stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
- | gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 |
- | gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | **59.39** | **87.7** | 48.59 | 64.35 | 85.29 | **38.28** |
- | **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
- | **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | **88.72** | 34.39 |
- | **Qwen3-Embedding-8B** | 8B | **75.22** | **68.71** | **90.43** | 58.57 | 87.52 | **51.56** | **69.44** | 88.58 | 34.83 |
-
- ### C-MTEB (MTEB Chinese)
-
- | C-MTEB | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS |
- |------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------|
- | multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
- | bge-multilingual-gemma2 | 9B | 67.64 | 68.52 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 |
- | gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 |
- | gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
- | ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | **85.98** | **72.86** | 76.97 | **63.92** |
- | **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
- | **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
- | **Qwen3-Embedding-8B** | 8B | **73.84** | **75.00** | **76.97** | **80.08** | 84.23 | 66.99 | **78.21** | 63.53 |
-
- ## Citation
-
- If you find our work helpful, feel free to cite it.
-
- ```
- @article{qwen3embedding,
-   title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
-   author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
-   journal={arXiv preprint arXiv:2506.05176},
-   year={2025}
- }
- ```

  ---
  tags:
  - tokie
+ library_name: tokie
  ---
+
  <p align="center">
+ <img src="tokie-banner.png" alt="tokie" width="600">
  </p>

  # Qwen3-Embedding-4B

+ Pre-built [tokie](https://github.com/chonkie-inc/tokie) tokenizer for [Qwen/Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B).

+ ## Quick Start (Python)

+ ```bash
+ pip install tokie
  ```

  ```python
+ import tokie
+
+ tokenizer = tokie.Tokenizer.from_pretrained("tokiers/Qwen3-Embedding-4B")
+ encoding = tokenizer.encode("Hello, world!")
+ print(encoding.ids)
+ print(encoding.attention_mask)
  ```
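+
+ A small follow-up sketch that uses only the calls shown above: counting tokens before embedding. The `Instruct: ...\nQuery:...` query format and the 32k context length come from the upstream [Qwen/Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) card; treat this as an illustration rather than part of the tokie API.
+
+ ```python
+ import tokie
+
+ tokenizer = tokie.Tokenizer.from_pretrained("tokiers/Qwen3-Embedding-4B")
+
+ # Qwen3-Embedding queries are usually prefixed with a one-sentence task instruction.
+ task = "Given a web search query, retrieve relevant passages that answer the query"
+ query = f"Instruct: {task}\nQuery:What is the capital of China?"
+
+ # Count tokens to check that the text fits the model's 32k context window.
+ encoding = tokenizer.encode(query)
+ print(len(encoding.ids), "tokens")
+ assert len(encoding.ids) <= 32768
+ ```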

+ ## Quick Start (Rust)

+ ```toml
+ [dependencies]
+ tokie = { version = "0.0.4", features = ["hf"] }
  ```

+ ```rust
+ use tokie::Tokenizer;
+
+ let tokenizer = Tokenizer::from_pretrained("tokiers/Qwen3-Embedding-4B").unwrap();
+ let encoding = tokenizer.encode("Hello, world!", true);
+ println!("{:?}", encoding.ids);
  ```

+ ## Files

+ - `tokenizer.tkz` — tokie binary format (~10x smaller, loads in ~5ms)
+ - `tokenizer.json` — original HuggingFace tokenizer (if available)
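+
+ To see the size difference of the two files listed above for yourself, one option is to download both and compare them on disk. This is a minimal sketch, assuming `huggingface_hub` is installed and both files are present in this repo:
+
+ ```python
+ import os
+ from huggingface_hub import hf_hub_download
+
+ repo = "tokiers/Qwen3-Embedding-4B"
+
+ # Download both tokenizer files from the Hub (cached locally after the first call).
+ tkz_path = hf_hub_download(repo, "tokenizer.tkz")
+ json_path = hf_hub_download(repo, "tokenizer.json")
+
+ # Compare on-disk sizes of the tokie binary format and the original JSON.
+ print("tokenizer.tkz :", os.path.getsize(tkz_path), "bytes")
+ print("tokenizer.json:", os.path.getsize(json_path), "bytes")
+ ```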

+ ## About tokie

+ **50x faster tokenization, 10x smaller model files, 100% accurate.**

+ tokie is a drop-in replacement for HuggingFace tokenizers, built in Rust. See [GitHub](https://github.com/chonkie-inc/tokie) for benchmarks and documentation.
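+
+ A quick way to sanity-check the parity claim on your own text is to compare token ids against the original HuggingFace tokenizer. This is a sketch, assuming the `tokenizers` package is installed; it loads `tokenizer.json` from the upstream Qwen repo:
+
+ ```python
+ import tokie
+ from tokenizers import Tokenizer as HFTokenizer
+
+ text = "Hello, world!"
+
+ # tokie tokenizer from this repo vs. the original HuggingFace tokenizer.
+ tk = tokie.Tokenizer.from_pretrained("tokiers/Qwen3-Embedding-4B")
+ hf = HFTokenizer.from_pretrained("Qwen/Qwen3-Embedding-4B")
+
+ # If the two are equivalent, the token ids should match exactly.
+ assert tk.encode(text).ids == hf.encode(text).ids
+ print("token ids match")
+ ```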

+ ## License

+ MIT OR Apache-2.0 (tokie library). Original model files retain their original license from [Qwen/Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B).