# Qwen3-VL-Reranker-2B

<p align="center">
    <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3_vl_reranker_logo.png" width="400"/>
</p>

## Highlights

The **Qwen3-VL-Embedding** and **Qwen3-VL-Reranker** model series are the latest additions to the Qwen family, built upon the recently open-sourced and powerful Qwen3-VL foundation model. Specifically designed for multimodal information retrieval and cross-modal understanding, this suite accepts diverse inputs including text, images, screenshots, and videos, as well as inputs containing a mixture of these modalities.

While the Embedding model generates high-dimensional vectors for broad applications like retrieval and clustering, the Reranker model is engineered to refine these results, establishing a comprehensive pipeline for state-of-the-art multimodal search.

- **Multimodal Versatility**: Both models seamlessly process inputs containing text, images, screenshots, and video within a unified framework. They achieve state-of-the-art performance across diverse multimodal tasks, including image-text retrieval, video-text matching, visual question answering (VQA), and multimodal content clustering.

- **Unified Representation Learning (Embedding)**: By leveraging the Qwen3-VL architecture, the Embedding model generates semantically rich vectors that capture both visual and textual information in a shared space. This facilitates efficient similarity computation and retrieval across different modalities.

- **High-Precision Reranking (Reranker)**: The Qwen3-VL-Reranker series is provided alongside the embedding models to complement them. The Reranker accepts a (query, document) pair, where both the query and the document can consist of arbitrary single or mixed modalities, and outputs a precise relevance score. In retrieval scenarios, the Embedding and Reranker models are typically used in tandem: the embedding model handles the initial recall stage, while the reranker handles the re-ranking stage. This two-step process significantly improves final retrieval accuracy.

- **Exceptional Practicality**: Inheriting Qwen3-VL's multilingual capabilities, the series supports over **30** languages, making it ideal for global applications. It is highly practical for real-world scenarios, offering flexible vector dimensions, customizable instructions for specific use cases, and strong performance even with quantized models. These features allow developers to easily integrate both models into existing pipelines for applications requiring robust cross-lingual and cross-modal understanding.

**Qwen3-VL-Reranker-2B** has the following features:

- Model Type: Multimodal Reranker
- Supported Languages: 30+ languages
- Supported Input Modalities: Text, images, screenshots, videos, and arbitrary multimodal combinations (e.g., text + image, text + video)
- Number of Parameters: 2B
- Context Length: 32K

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/) and [GitHub repository](https://github.com/QwenLM/Qwen3-VL-Embedding).

## Qwen3-VL-Embedding and Qwen3-VL-Reranker Model List

| Model | Size | Model Layers | Sequence Length | Embedding Dimension | Quantization Support | MRL Support | Instruction Aware |
|---|---|---|---|---|---|---|---|
| [Qwen3-VL-Embedding-2B] | 2B | 28 | 32K | 2048 | Yes | Yes | Yes |
| [Qwen3-VL-Embedding-8B] | 8B | 36 | 32K | 4096 | Yes | Yes | Yes |
| [Qwen3-VL-Reranker-2B] | 2B | 28 | 32K | - | - | - | Yes |
| [Qwen3-VL-Reranker-8B] | 8B | 36 | 32K | - | - | - | Yes |

> **Note**:
> - `Quantization Support` indicates whether post-hoc quantization of the output embeddings is supported.
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction for different tasks.
>
> Our evaluation indicates that, for most downstream tasks, using instructions typically yields an improvement of 1% to 5% compared to not using them. We therefore recommend that developers create tailored instructions for their specific tasks and scenarios. In multilingual contexts, we also advise writing instructions in English, as most instructions used during model training were originally written in English.

## Model Performance

We utilize retrieval task datasets from various subtasks of the [MMEB-v2](https://huggingface.co/spaces/TIGER-Lab/MMEB-Leaderboard) and [MMTEB](https://huggingface.co/spaces/mteb/leaderboard) retrieval benchmarks. For visual document retrieval, we employ the [JinaVDR](https://huggingface.co/collections/jinaai/jinavdr-visual-document-retrieval) and [ViDoRe v3](https://huggingface.co/blog/QuentinJG/introducing-vidore-v3) datasets. Our results demonstrate that all Qwen3-VL-Reranker models consistently outperform the base embedding model and baseline rerankers, with the 8B variant achieving the best performance across most tasks.

| Model | Size | MMEB-v2(Retrieval) - Avg | MMEB-v2(Retrieval) - Image | MMEB-v2(Retrieval) - Video | MMEB-v2(Retrieval) - VisDoc | MMTEB(Retrieval) | JinaVDR | ViDoRe(v3) |
|---|---|---|---|---|---|---|---|---|
| Qwen3-VL-Embedding-2B | 2B | 73.6 | 74.9 | 52.1 | 80.2 | 68.1 | 71.0 | 52.9 |
| jina-reranker-m0 | 2B | - | 68.2 | - | 85.2 | - | 82.2 | 57.8 |
| Qwen3-VL-Reranker-2B | 2B | 75.1 | 73.8 | 52.1 | 83.4 | 70.0 | 80.9 | 60.8 |
| Qwen3-VL-Reranker-8B | 8B | 79.2 | 80.7 | 55.8 | 86.3 | 74.9 | 83.6 | 66.7 |

## Usage

- **Requirements**

```text
transformers>=4.57.0
qwen-vl-utils>=0.0.14
torch==2.8.0
```

### Basic Usage Example

```python
from scripts.qwen3_vl_reranker import Qwen3VLReranker

# Specify the model path
model_name_or_path = "Qwen/Qwen3-VL-Reranker-2B"

# Initialize the Qwen3VLReranker model
model = Qwen3VLReranker(model_name_or_path=model_name_or_path)
# We recommend enabling flash_attention_2 for better acceleration and memory saving:
# model = Qwen3VLReranker(model_name_or_path=model_name_or_path, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")

# Combine the query and documents into a single input
inputs = {
    "instruction": "Retrieval relevant image or text with user's query",
    "query": {"text": "A woman playing with her dog on a beach at sunset."},
    "documents": [
        {"text": "A woman shares a joyful moment with her golden retriever on a sun-drenched beach at sunset, as the dog offers its paw in a heartwarming display of companionship and trust."},
        {"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
        {"text": "A woman shares a joyful moment with her golden retriever on a sun-drenched beach at sunset, as the dog offers its paw in a heartwarming display of companionship and trust.", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"}
    ],
    "fps": 1.0
}

scores = model.process(inputs)
print(scores)
# [0.8408790826797485, 0.6197134852409363, 0.7778129577636719]
```
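The scores are returned in the same order as the `documents` list, so producing a final ranking is just a sort over indices. Using the example scores shown above:

```python
# Scores returned by the reranker, one per document, in input order
# (values taken from the example output above).
scores = [0.8408790826797485, 0.6197134852409363, 0.7778129577636719]

# Rank document indices from most to least relevant.
ranking = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
print(ranking)  # [0, 2, 1]
```

Here the pure-text document ranks first, followed by the mixed text + image document.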

For more usage examples, please visit our [GitHub repository](https://github.com/QwenLM/Qwen3-VL-Embedding).