---
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen3-VL-4B-Thinking
pipeline_tag: image-text-to-text
tags:
- visual-grounding
- multimodal
- qwen3-vl
- reinforcement-learning
- grpo
---

# EGM-Qwen3-VL-4B

<p align="center">
<a href="https://nvlabs.github.io/EGM">[Project Page]</a> &nbsp;
<a href="https://github.com/NVlabs/EGM">[Code]</a> &nbsp;
</p>

## Model Summary

**EGM-Qwen3-VL-4B** is an efficient visual grounding model from the [EGM (Efficient Visual Grounding Language Models)](https://nvlabs.github.io/EGM) family. It is built on top of [Qwen3-VL-4B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-4B-Thinking) and trained with a two-stage pipeline: supervised fine-tuning (SFT) followed by reinforcement learning (RL) with GRPO (Group Relative Policy Optimization).

EGM demonstrates that, by increasing test-time computation, small vision-language models can **outperform much larger models** on visual grounding tasks while remaining significantly faster at inference.

## Key Results

- **90.9 average IoU** across the RefCOCO, RefCOCO+, and RefCOCOg benchmarks (vs. 87.2 for the base Qwen3-VL-4B-Thinking)
- **+3.7 IoU improvement** over the base model
- Outperforms Qwen3-VL-235B-A22B-Instruct (88.2 avg IoU) while being dramatically faster

### RefCOCO Benchmark Results

| Model | RefCOCO val | RefCOCO test-A | RefCOCO test-B | RefCOCO+ val | RefCOCO+ test-A | RefCOCO+ test-B | RefCOCOg val | RefCOCOg test | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Qwen3-VL-4B-Thinking | 90.0 | 92.7 | 85.6 | 85.2 | 89.5 | 79.3 | 87.0 | 87.7 | 87.2 |
| **EGM-Qwen3-VL-4B** | **93.5** | **95.1** | **90.0** | **89.7** | **93.1** | **84.9** | **90.4** | **90.8** | **90.9** |

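The table reports average box IoU between the predicted and ground-truth boxes on each split. For reference, the following is a minimal sketch of IoU for axis-aligned `(x1, y1, x2, y2)` boxes; it is an illustrative helper, not the project's evaluation code.

```python
def box_iou(pred, gt):
    """Intersection-over-Union for two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_pred = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_pred + area_gt - inter
    return inter / union if union > 0 else 0.0

# Example: a prediction that largely overlaps the ground truth.
print(box_iou((10, 10, 110, 110), (20, 20, 120, 120)))  # ~0.68
```
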
## How It Works

VLMs of different sizes often share the same visual encoder. Small models fall behind large models primarily because of a gap in **text understanding**: 62.8% of small-model errors stem from complex prompts with multiple relational descriptions. EGM mitigates this gap by letting a small model generate many cheaper, mid-quality reasoning tokens at test time, matching the performance of large VLMs that produce fewer but more expensive tokens.

### Training Pipeline

1. **SFT Stage**: A proprietary VLM generates detailed chain-of-thought reasoning steps for the visual grounding training data, and the base model is fine-tuned on this data. The SFT checkpoint is available as [nvidia/EGM-4B-SFT](https://huggingface.co/nvidia/EGM-4B-SFT).
2. **RL Stage**: GRPO is applied with a reward function combining IoU and task-success metrics, further improving grounding accuracy (a schematic sketch of such a reward follows below).

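The actual GRPO reward is defined in the training code; the sketch below only illustrates the general shape of an "IoU plus task success" reward. The threshold and bonus weight are illustrative assumptions, and `box_iou` is the helper sketched earlier.

```python
def grounding_reward(pred_box, gt_box, iou_threshold=0.5, success_bonus=1.0):
    """Toy reward combining box IoU with a binary task-success bonus.

    iou_threshold and success_bonus are illustrative values, not the
    paper's hyperparameters; box_iou is the helper sketched above.
    """
    iou = box_iou(pred_box, gt_box)
    success = 1.0 if iou >= iou_threshold else 0.0
    return iou + success_bonus * success
```
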
## Quickstart

### Download

```bash
pip install -U huggingface_hub
huggingface-cli download nvidia/EGM-4B --local-dir ./models/EGM-4B
```

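### Inference (Transformers)

The snippet below is a minimal sketch of loading the checkpoint with the standard Transformers image-text-to-text API, assuming a recent `transformers` release with Qwen3-VL support. The image URL and referring expression are placeholders; the exact grounding prompt format follows the Qwen3-VL chat template.

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "./models/EGM-4B"  # local path from the download step above

processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder image and referring expression; replace with your own.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/street.jpg"},
            {"type": "text", "text": "Locate the person in the red jacket and output the bounding box."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens after the prompt.
generated = output_ids[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```
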
### Evaluation

```bash
pip install sglang==0.5.5

# Run from a checkout of the EGM code repository (see the [Code] link above),
# which provides verl/scripts/sglang_infer.sh.
export BASE_DIR=$(pwd)
export MODEL_PATH="${BASE_DIR}/models/EGM-4B"
export DATA_JSON="${BASE_DIR}/data/EGM_Datasets/metadata/eval/refcoco+_testA.jsonl"
export OUTPUT_DIR="${BASE_DIR}/result/"
export BASE_IMG_DIR="${BASE_DIR}"

cd verl
bash scripts/sglang_infer.sh
```

## Model Architecture

| Component | Details |
|---|---|
| Architecture | Qwen3VLForConditionalGeneration |
| Text Hidden Size | 2560 |
| Text Layers | 36 |
| Attention Heads | 32 (8 KV heads) |
| Text Intermediate Size | 9728 |
| Vision Hidden Size | 1024 |
| Vision Layers | 24 |
| Patch Size | 16 x 16 |
| Max Position Embeddings | 262,144 |
| Vocabulary Size | 151,936 |

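These values mirror the checkpoint's `config.json` and can be sanity-checked after downloading. A minimal sketch follows; the config nests text and vision sub-configs whose exact attribute names depend on the installed `transformers` version.

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("./models/EGM-4B")
# Prints the full nested config; the text/vision hidden sizes, layer counts,
# and vocabulary size should match the table above.
print(cfg)
```
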
## Citation

```bibtex
@article{zhan2026EGM,
  author  = {Zhan, Guanqi and Li, Changye and Liu, Zhijian and Lu, Yao and Wu, Yi and Han, Song and Zhu, Ligeng},
  title   = {EGM: Efficient Visual Grounding Language Models},
  journal = {arXiv},
  year    = {2026}
}
```

## Acknowledgment

This repository benefits from [Qwen3-VL](https://github.com/QwenLM/Qwen3-VL), [InternVL](https://github.com/OpenGVLab/InternVL), [verl](https://github.com/volcengine/verl), and [verl-internvl](https://github.com/Weiyun1025/verl-internvl).