---
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
model-index:
- name: GLM-4.6V-Flash
  results:
  - task:
      type: image-text-to-text
    dataset:
      name: Multimodal Benchmarks
      type: benchmark
    metrics:
    - name: MMBench V1.1
      type: mmbench_v1.1
      value: 86.9
    - name: MMBench V1.1 (CN)
      type: mmbench_v1.1_cn
      value: 85.9
    - name: MMStar
      type: mmstar
      value: 74.7
    - name: BLINK (Val)
      type: blink_val
      value: 65.5
    - name: MUIRBENCH
      type: muirbench
      value: 75.7
    - name: MMMU (Val)
      type: mmmu_val
      value: 71.1
    - name: MMMU_Pro
      type: mmmu_pro
      value: 60.6
    - name: VideoMMMU
      type: videommmu
      value: 70.1
    - name: MathVista
      type: mathvista
      value: 82.7
    - name: AI2D
      type: ai2d
      value: 89.2
    - name: DynaMath
      type: dynamath
      value: 43.7
    - name: WeMath
      type: wemath
      value: 60.0
    - name: ZeroBench (sub)
      type: zerobench_sub
      value: 22.5
    - name: MMBrowseComp
      type: mmbrowsecomp
      value: 7.1
    - name: Design2Code
      type: design2code
      value: 69.8
    - name: Flame-React-Eval
      type: flame_react_eval
      value: 78.8
    - name: OSWorld
      type: osworld
      value: 21.1
    - name: AndroidWorld
      type: androidworld
      value: 42.7
    - name: WebVoyager
      type: webvoyager
      value: 71.8
    - name: Webquest-SingleQA
      type: webquest_singleqa
      value: 75.1
    - name: Webquest-MultiQA
      type: webquest_multiqa
      value: 53.4
    - name: MMLongBench-Doc
      type: mmlongbench_doc
      value: 53.0
    - name: MMLongBench-128K
      type: mmlongbench_128k
      value: 63.4
    - name: LVBench
      type: lvbench
      value: 49.5
    - name: OCRBench
      type: ocrbench
      value: 84.7
    - name: OCR-Bench_v2 (EN)
      type: ocr_bench_v2_en
      value: 63.5
    - name: OCR-Bench_v2 (CN)
      type: ocr_bench_v2_cn
      value: 59.5
    - name: ChartQAPro
      type: chartqapro
      value: 62.6
    - name: ChartMuseum
      type: chartmuseum
      value: 49.8
    - name: CharXiv_Val-Reasoning
      type: charxiv_val_reasoning
      value: 59.6
    - name: OmniSpatial
      type: omnispatial
      value: 50.6
    - name: RefCOCO-avg (val)
      type: refcoco_avg_val
      value: 85.6
    - name: TreeBench
      type: treebench
      value: 45.7
    - name: Ref-L4-test
      type: ref_l4_test
      value: 87.7
    source:
      name: Model Card
      url: https://huggingface.co/zai-org/GLM-4.6V-Flash
---

# GLM-4.6V

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="40%"/>
</div>

This model is part of the GLM-V family of models, introduced in the paper [GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning](https://huggingface.co/papers/2507.01006).

-   **GLM-4.6V Blog**: [https://z.ai/blog/glm-4.6v](https://z.ai/blog/glm-4.6v)
-   **Paper**: [https://huggingface.co/papers/2507.01006](https://huggingface.co/papers/2507.01006)
-   **GitHub Repository**: [https://github.com/zai-org/GLM-V](https://github.com/zai-org/GLM-V)
-   **Online Demo**: [https://chat.z.ai/](https://chat.z.ai/)
-   **API Access**: [Z.ai Open Platform](https://docs.z.ai/guides/vlm/glm-4.6v)
-   **Desktop Assistant App**: [https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App)

## Introduction

The GLM-4.6V series includes two models: GLM-4.6V (106B), a foundation model designed for cloud and high-performance
cluster scenarios,
and GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications.
GLM-4.6V scales its context window to 128K tokens during training
and achieves SoTA performance in visual understanding among models of similar parameter scales.
Crucially, we integrate native Function Calling capabilities for the first time.
This effectively bridges the gap between "visual perception" and "executable action",
providing a unified technical foundation for multimodal agents in real-world business scenarios.

![GLM-4.6V Benchmarks](https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/bench_46v.jpeg)

Beyond achieving SoTA performance across major multimodal benchmarks at comparable model scales, GLM-4.6V introduces
several key features:

- **Native Multimodal Function Calling**
Enables native vision-driven tool use. Images, screenshots, and document pages can be passed directly as tool inputs without text conversion, while visual outputs (charts, search images, rendered pages) are interpreted and integrated into the reasoning chain. This closes the loop from perception to understanding to execution; see the sketch after this list.

- **Interleaved Image-Text Content Generation**
Supports high-quality mixed media creation from complex multimodal inputs. GLM-4.6V takes a multimodal context—spanning documents, user inputs, and tool-retrieved images—and synthesizes coherent, interleaved image-text content tailored to the task. During generation it can actively call search and retrieval tools to gather and curate additional text and visuals, producing rich, visually grounded content.


- **Multimodal Document Understanding**
GLM-4.6V can process up to 128K tokens of multi-document or long-document input, directly interpreting richly formatted pages as images. It understands text, layout, charts, tables, and figures jointly, enabling accurate comprehension of complex, image-heavy documents without requiring prior conversion to plain text.
    
- **Frontend Replication & Visual Editing** 
Reconstructs pixel-accurate HTML/CSS from UI screenshots and supports natural-language-driven edits. It detects layout, components, and styles visually, generates clean code, and applies iterative visual modifications through simple user instructions.
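
Below is a minimal sketch of multimodal function calling against an OpenAI-compatible endpoint, such as one launched with vLLM or SGLang (see Usage below). The endpoint URL and the `search_product` tool are illustrative placeholders, not an official API.

```python
from openai import OpenAI

# Endpoint URL and API key are placeholders for a locally served model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# A hypothetical tool; any JSON-schema tool definition works the same way.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_product",
            "description": "Search an online catalog for a product by name.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Product name to search for."}
                },
                "required": ["query"],
            },
        },
    }
]

# The image goes straight into the request; no conversion to text is needed.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text", "text": "Find this product online."},
        ],
    }
]

response = client.chat.completions.create(
    model="zai-org/GLM-4.6V-Flash",
    messages=messages,
    tools=tools,
)
# If the model chose to call the tool, the arguments arrive as a JSON string.
print(response.choices[0].message.tool_calls)
```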


**This Hugging Face repository hosts the `GLM-4.6V-Flash` model, part of the `GLM-V` series.**

## Usage

### Environment Installation

For `SGLang`:

```bash
pip install "sglang>=0.5.6post1"
pip install "transformers>=5.0.0rc0"
```

For `vLLM`:

```bash
pip install "vllm>=0.12.0"
pip install "transformers>=5.0.0rc0"
```
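
Both backends can expose an OpenAI-compatible server. A minimal sketch of launching one is shown below; the port is illustrative, and additional flags (tensor parallelism, context length, etc.) depend on your hardware:

```bash
# SGLang: serve the model behind an OpenAI-compatible endpoint
python -m sglang.launch_server --model-path zai-org/GLM-4.6V-Flash --port 8000

# vLLM: equivalent server via the `vllm serve` CLI
vllm serve zai-org/GLM-4.6V-Flash --port 8000
```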

### Quick Start with Transformers

```python
from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration

MODEL_PATH = "zai-org/GLM-4.6V-Flash"

# A single-turn request mixing one image with a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
            },
            {
                "type": "text",
                "text": "describe this image"
            }
        ],
    }
]

processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = Glm4vMoeForConditionalGeneration.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    dtype="auto",       # use the checkpoint's native precision
    device_map="auto",  # spread weights across available devices
)

# Render the chat template, tokenize, and move the tensors to the model's device.
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)

generated_ids = model.generate(**inputs, max_new_tokens=8192)
# Decode only the newly generated tokens; special tokens are kept so the
# model's reasoning markup stays visible.
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```
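
The decoded string may contain the model's reasoning wrapped in special tags before the final answer (earlier GLM-V releases use `<think>...</think>`). A minimal sketch for separating the two, assuming that tag convention:

```python
import re

def split_reasoning(output_text: str) -> tuple[str, str]:
    """Split GLM-V style <think> reasoning from the final answer."""
    match = re.search(r"<think>(.*?)</think>", output_text, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", output_text, flags=re.DOTALL).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(output_text)
print(answer)
```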

## Evaluation Settings

We primarily use vLLM as the backend for model inference. For faster and more reliable performance on video tasks, we employ SGLang. To reproduce our leaderboard results, we recommend the following decoding parameters:

- `top_p`: 0.6
- `top_k`: 2
- `temperature`: 0.8
- `repetition_penalty`: 1.1
- `max_generate_tokens`: 16K
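
With an OpenAI-compatible server (see Environment Installation above), these parameters can be supplied per request; vLLM accepts non-standard sampling knobs such as `top_k` and `repetition_penalty` through `extra_body`. A minimal sketch, with an illustrative endpoint URL:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-4.6V-Flash",
    messages=[{"role": "user", "content": "Describe the GLM-4.6V series."}],
    temperature=0.8,
    top_p=0.6,
    max_tokens=16384,             # ~16K generated tokens
    extra_body={                  # vLLM-specific sampling parameters
        "top_k": 2,
        "repetition_penalty": 1.1,
    },
)
print(response.choices[0].message.content)
```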

For more usage details, please refer to our [GitHub repository](https://github.com/zai-org/GLM-V).



## Fixed and Remaining Issues

Since open-sourcing GLM-4.1V, we have received extensive feedback from the community and are well aware that the model still has many shortcomings. In subsequent iterations we worked to address several common issues, such as repetitive thinking outputs and formatting errors, and these have been mitigated to some extent in this new version.

However, the model still has several limitations and issues that we will fix as soon as possible:

1. Pure text QA capabilities still have significant room for improvement. In this development cycle, our primary focus was on visual multimodal scenarios, and we will enhance pure text abilities in upcoming updates.
2. The model may still overthink or even repeat itself in certain cases, especially when dealing with complex prompts.
3. In some situations, the model may restate the answer at the end of its response.
4. There remain certain perception limitations, such as counting accuracy and identifying specific individuals, which still require improvement.

Thank you for your patience and understanding. We also welcome feedback and suggestions in the Issues section; we will respond and improve as much as we can!

## Citation

If you use this model, please cite the following paper:

```bibtex
@misc{vteam2025glm45vglm41vthinkingversatilemultimodal,
      title={GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning}, 
      author={V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Bin Chen and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiale Zhu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingdao Liu and Mingde Xu and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Tianyu Tong and Wenkai Li and Wei Jia and Xiao Liu and Xiaohan Zhang and Xin Lyu and Xinyue Fan and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yanzi Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuting Wang and Yu Wang and Yuxuan Zhang and Zhao Xue and Zhenyu Hou and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
      year={2025},
      eprint={2507.01006},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.01006}, 
}
```