---
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen3-VL-4B-Thinking
pipeline_tag: image-text-to-text
tags:
- visual-grounding
- multimodal
- qwen3-vl
- supervised-fine-tuning
---

# EGM-Qwen3-VL-4B-SFT

<p align="center">
<a href="https://nvlabs.github.io/EGM">[Project Page]</a> &nbsp;
<a href="https://github.com/NVlabs/EGM">[Code]</a> &nbsp;
</p>

## Model Summary

**EGM-Qwen3-VL-4B-SFT** is the supervised fine-tuning (SFT) checkpoint from the first stage of the [EGM (Efficient Visual Grounding Language Models)](https://nvlabs.github.io/EGM) training pipeline. It is built on top of [Qwen3-VL-4B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-4B-Thinking).

This is an **intermediate checkpoint** intended for further reinforcement learning (RL) training. For the final model with the best performance, see [nvidia/EGM-4B](https://huggingface.co/nvidia/EGM-4B).

## Training Details

### SFT Stage

In the SFT stage, a proprietary VLM generates detailed chain-of-thought reasoning steps for visual grounding training data. The base Qwen3-VL-4B-Thinking model is then fine-tuned on this reasoning-augmented data to learn structured visual grounding with explicit reasoning.
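
To make the data recipe concrete, below is a minimal sketch of what one reasoning-augmented grounding sample could look like in chat format. The field names, `<think>` delimiters, and box notation are illustrative assumptions, not the released SFT data schema.

```python
# Hypothetical illustration of a reasoning-augmented grounding sample.
# The schema, tags, and coordinate convention are assumptions for clarity,
# not the actual SFT data format used by EGM.
sample = {
    "images": ["kitchen.jpg"],
    "messages": [
        {
            "role": "user",
            "content": "<image>\nLocate the red mug on the counter. Answer with a bounding box.",
        },
        {
            "role": "assistant",
            # Chain-of-thought produced by a proprietary VLM, kept as explicit
            # reasoning that the student model is fine-tuned to imitate.
            "content": (
                "<think>The counter occupies the lower half of the image; "
                "among the three mugs, only the leftmost one is red.</think>\n"
                "<box>[112, 540, 248, 692]</box>"
            ),
        },
    ],
}
```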
34
+
35
+ This SFT checkpoint serves as the initialization for the subsequent RL stage (GRPO), which yields the final [EGM-4B](https://huggingface.co/nvidia/EGM-4B) model.
36
+
37
+ ### How to Use for RL Training
38
+
39
+ ```bash
40
+ pip install -U huggingface_hub
41
+ huggingface-cli download nvidia/EGM-4B-SFT --local-dir ./models/EGM-4B-SFT
42
+ ```
43
+
44
+ Then follow the installation and RL training instructions in the [EGM repository](https://github.com/NVlabs/EGM#rl-training).
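
Before launching RL, it can be useful to sanity-check that the downloaded checkpoint loads and generates. A minimal inference sketch, assuming a recent `transformers` release with Qwen3-VL support; the class names and chat-template keys follow the generic transformers image-text-to-text interface, not an EGM-specific API:

```python
# Quick sanity check of the SFT checkpoint (inference only, not the RL recipe).
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "./models/EGM-4B-SFT"  # local dir from the download step above
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "demo.jpg"},  # replace with a local test image
            {"type": "text", "text": "Locate the dog in the image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

out = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```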

## Model Architecture

| Component | Details |
|---|---|
| Architecture | Qwen3VLForConditionalGeneration |
| Precision | bfloat16 |
| Text Hidden Size | 2560 |
| Text Layers | 36 |
| Attention Heads | 32 (8 KV heads) |
| Text Intermediate Size | 9728 |
| Vision Hidden Size | 1024 |
| Vision Layers | 24 |
| Patch Size | 16 x 16 |
| Max Position Embeddings | 262,144 |
| Vocabulary Size | 151,936 |
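
These values come from the checkpoint's `config.json` and can be verified locally; a short sketch, assuming the standard transformers `AutoConfig` interface (the exact nested attribute names may vary by transformers version):

```python
# Inspect the checkpoint config to confirm the architecture values listed above.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("./models/EGM-4B-SFT")  # local dir from the download step
print(cfg.architectures)   # expected: ['Qwen3VLForConditionalGeneration']
print(cfg.text_config)     # text hidden size, layer count, attention/KV heads, etc.
print(cfg.vision_config)   # vision hidden size, depth, patch size, etc.
```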

## Related Models

| Model | Description |
|---|---|
| [nvidia/EGM-4B](https://huggingface.co/nvidia/EGM-4B) | Final RL-trained model (best performance) |
| [nvidia/EGM-8B-SFT](https://huggingface.co/nvidia/EGM-8B-SFT) | SFT checkpoint for the 8B variant |
| [nvidia/EGM-8B](https://huggingface.co/nvidia/EGM-8B) | Final RL-trained 8B model |

## Citation

```bibtex
@article{zhan2026EGM,
  author  = {Zhan, Guanqi and Li, Changye and Liu, Zhijian and Lu, Yao and Wu, Yi and Han, Song and Zhu, Ligeng},
  title   = {EGM: Efficient Visual Grounding Language Models},
  journal = {arXiv},
  year    = {2026}
}
```

## Acknowledgment

This repository benefits from [Qwen3-VL](https://github.com/QwenLM/Qwen3-VL), [InternVL](https://github.com/OpenGVLab/InternVL), [verl](https://github.com/volcengine/verl), and [verl-internvl](https://github.com/Weiyun1025/verl-internvl).