---
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen3-VL-8B-Thinking
pipeline_tag: image-text-to-text
tags:
- visual-grounding
- multimodal
- qwen3-vl
- reinforcement-learning
- grpo
---
# EGM-Qwen3-VL-8B

<p align="center">
  <a href="https://nvlabs.github.io/EGM">[Project Page]</a>
  <a href="https://github.com/NVlabs/EGM">[Code]</a>
</p>

## Model Summary

**EGM-Qwen3-VL-8B** is the flagship model of the [EGM (Efficient Visual Grounding Language Models)](https://nvlabs.github.io/EGM) family. It is built on top of [Qwen3-VL-8B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-8B-Thinking) and trained with a two-stage pipeline: supervised fine-tuning (SFT) followed by reinforcement learning (RL) with GRPO (Group Relative Policy Optimization).

EGM demonstrates that, by spending more computation at test time, small vision-language models can **outperform much larger models** on visual grounding tasks while remaining significantly faster at inference.
## Key Results

- **91.4 average IoU** across the RefCOCO, RefCOCO+, and RefCOCOg benchmarks (vs. 87.8 for the base Qwen3-VL-8B-Thinking)
- **+3.6 IoU improvement** over the base model
- **Outperforms Qwen3-VL-235B-A22B-Instruct** (88.2 avg IoU) and **Qwen3-VL-235B-A22B-Thinking** (90.7 avg IoU)
- **5.9x faster** inference than Qwen3-VL-235B-A22B-Instruct (737 ms vs. 4,320 ms average latency)
- **18.9x faster** than Qwen3-VL-235B-A22B-Thinking

### RefCOCO Benchmark Results

| Model | RefCOCO val | RefCOCO test-A | RefCOCO test-B | RefCOCO+ val | RefCOCO+ test-A | RefCOCO+ test-B | RefCOCOg val | RefCOCOg test | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Qwen3-VL-8B-Thinking | 91.0 | 92.5 | 86.6 | 86.2 | 91.2 | 80.5 | 87.8 | 88.6 | 87.8 |
| **EGM-Qwen3-VL-8B** | **93.9** | **95.6** | **91.2** | **90.5** | **93.5** | **86.3** | **90.8** | **91.4** | **91.4** |
| Qwen3-VL-235B-A22B-Instruct | 90.4 | 94.6 | 82.2 | 86.4 | 92.1 | 78.5 | 90.5 | 90.5 | 88.2 |
| Qwen3-VL-235B-A22B-Thinking | 93.4 | 94.1 | 90.6 | 89.5 | 91.4 | 85.2 | 90.4 | 90.5 | 90.7 |
## How It Works

VLMs of different sizes often share the same visual encoder, so small models fall behind large ones primarily because of a gap in **text understanding**: 62.8% of small-model errors stem from complex prompts with multiple relational descriptions. EGM mitigates this gap by having the small model generate many mid-quality reasoning tokens at test time, matching the performance of large VLMs that produce fewer but far more expensive tokens.
### Training Pipeline

1. **SFT Stage**: A proprietary VLM generates detailed chain-of-thought reasoning traces for the visual grounding training data, and the base model is fine-tuned on them. The SFT checkpoint is available as [nvidia/EGM-8B-SFT](https://huggingface.co/nvidia/EGM-8B-SFT).
2. **RL Stage**: GRPO is applied with a reward that combines IoU and task-success metrics, further improving grounding accuracy (a rough sketch of such a reward follows the list).
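The RL reward is described above only at a high level: a dense IoU term combined with a task-success metric. As a rough illustration, a minimal sketch of such a combined reward is shown below; the `iou_weight` mixing coefficient and the 0.5 success threshold are illustrative assumptions, not the released EGM training configuration.

```python
# Hypothetical sketch of an "IoU + task success" reward for GRPO-style RL.
# The 0.5 weighting and the 0.5 success threshold are assumptions for
# illustration, not the released EGM training configuration.

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def grounding_reward(pred_box, gt_box, iou_weight=0.5, success_threshold=0.5):
    """Dense IoU term plus a binary task-success term."""
    iou = box_iou(pred_box, gt_box)
    success = 1.0 if iou >= success_threshold else 0.0
    return iou_weight * iou + (1.0 - iou_weight) * success

# Example: a reasonably accurate prediction earns both terms.
print(grounding_reward((10, 10, 50, 50), (12, 12, 52, 52)))  # ~0.91
```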
## Quickstart
### Download

```bash
pip install -U huggingface_hub
huggingface-cli download nvidia/EGM-8B --local-dir ./models/EGM-8B
```
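### Inference

Since the card declares `library_name: transformers` and a `Qwen3VLForConditionalGeneration` architecture, the model should also load through the standard `transformers` image-text-to-text path. The snippet below is a minimal sketch of that route, assuming a recent `transformers` release with Qwen3-VL support; the grounding prompt and the format of the returned box follow general Qwen-VL conventions and are assumptions, not taken from this card.

```python
# Minimal single-image grounding sketch via transformers.
# Assumes a transformers version that ships Qwen3-VL support; the prompt
# wording and output box format are assumptions, not a documented contract.
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "./models/EGM-8B"  # local dir from the download step above
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)

image = Image.open("example.jpg")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Locate the red mug on the table and output its bounding box."},
    ],
}]

# Render the chat template, then batch text and image through the processor.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=1024)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)  # reasoning trace followed by the predicted bounding box
```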
### Evaluation

```bash
pip install sglang==0.5.5

export BASE_DIR=$(pwd)
export MODEL_PATH="${BASE_DIR}/models/EGM-8B"
export DATA_JSON="${BASE_DIR}/data/EGM_Datasets/metadata/eval/refcoco+_testA.jsonl"
export OUTPUT_DIR="${BASE_DIR}/result/"
export BASE_IMG_DIR="${BASE_DIR}"

cd verl
bash scripts/sglang_infer.sh
```
vLLM is also supported:

```bash
export BASE_DIR=$(pwd)
export MODEL_PATH="${BASE_DIR}/models/EGM-8B"
export DATA_JSON="${BASE_DIR}/data/EGM_Datasets/metadata/eval/refcoco+_testA.jsonl"
export OUTPUT_DIR="${BASE_DIR}/result/"
export BASE_IMG_DIR="${BASE_DIR}"

cd verl
bash scripts/vllm_infer.sh
```
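The inference scripts write predictions under `${OUTPUT_DIR}`. The sketch below shows one way the reported average IoU could be recomputed from such output; both of its assumptions are hypothetical and not documented by this card: that boxes appear in the generated text as `[x1, y1, x2, y2]` pixel coordinates, and that results are stored one JSON object per line with `response` and `gt_box` fields.

```python
# Hypothetical post-hoc scoring sketch. The "[x1, y1, x2, y2]" output format
# and the "response"/"gt_box" JSONL fields are assumptions, not the actual
# schema of sglang_infer.sh / vllm_infer.sh.
import json
import re

def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes (same helper as the reward sketch)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - iw * ih
    return iw * ih / union if union > 0 else 0.0

def parse_box(text):
    """Return the last [x1, y1, x2, y2] quadruple found in the text, if any."""
    found = re.findall(r"\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]", text)
    return tuple(map(int, found[-1])) if found else None

ious = []
with open("result/refcoco+_testA.jsonl") as f:
    for line in f:
        record = json.loads(line)
        pred = parse_box(record["response"])
        ious.append(box_iou(pred, record["gt_box"]) if pred else 0.0)
print(f"average IoU: {100 * sum(ious) / len(ious):.1f}")
```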
## Model Architecture

| Component | Details |
|---|---|
| Architecture | Qwen3VLForConditionalGeneration |
| Text Hidden Size | 4096 |
| Text Layers | 36 |
| Attention Heads | 32 (8 KV heads) |
| Text Intermediate Size | 12,288 |
| Vision Hidden Size | 1152 |
| Vision Layers | 27 |
| Patch Size | 16 x 16 |
| Max Position Embeddings | 262,144 |
| Vocabulary Size | 151,936 |
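These values mirror fields in the checkpoint's `config.json` and can be checked programmatically. The nested `text_config` / `vision_config` attribute names below follow the Qwen3-VL convention in recent `transformers` releases and are an assumption here:

```python
# Sanity-check the architecture table against the shipped config.
# The nested attribute names are assumed from the Qwen3-VL convention.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("./models/EGM-8B")
print(cfg.architectures)                    # ['Qwen3VLForConditionalGeneration']
print(cfg.text_config.hidden_size)          # 4096
print(cfg.text_config.num_hidden_layers)    # 36
print(cfg.text_config.num_attention_heads)  # 32
print(cfg.vision_config.hidden_size)        # 1152
```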
## Citation

```bibtex
@article{zhan2026EGM,
  author  = {Zhan, Guanqi and Li, Changye and Liu, Zhijian and Lu, Yao and Wu, Yi and Han, Song and Zhu, Ligeng},
  title   = {EGM: Efficient Visual Grounding Language Models},
  journal = {arXiv preprint},
  year    = {2026}
}
```
## Acknowledgment

This repository benefits from [Qwen3-VL](https://github.com/QwenLM/Qwen3-VL), [InternVL](https://github.com/OpenGVLab/InternVL), [verl](https://github.com/volcengine/verl), and [verl-internvl](https://github.com/Weiyun1025/verl-internvl).