Commit 4d0da37 · 0 Parent(s)
ZHANGYUXUAN-zR committed: Duplicate from zai-org/GLM-OCR

Co-authored-by: zR <ZHANGYUXUAN-zR@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,242 @@
---
license: mit
language:
- zh
- en
- fr
- es
- ru
- de
- ja
- ko
pipeline_tag: image-to-text
library_name: transformers
---

# GLM-OCR

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/logo.svg" width="40%"/>
</div>
<p align="center">
👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/wechat.jpg" target="_blank">WeChat</a> and <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> communities
<br>
📍 Use the GLM-OCR <a href="https://docs.z.ai/guides/vlm/glm-ocr" target="_blank">API</a>
<br>
👉 <a href="https://github.com/zai-org/GLM-OCR" target="_blank">GLM-OCR SDK</a> (recommended)
<br>
📖 <a href="https://arxiv.org/abs/2603.10910" target="_blank">Technical Report</a>
</p>

## Introduction

GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It introduces a Multi-Token Prediction (MTP) loss and stable full-task reinforcement learning to improve training efficiency, recognition accuracy, and generalization. The model integrates the CogViT visual encoder pre-trained on large-scale image–text data, a lightweight cross-modal connector with efficient token downsampling, and a GLM-0.5B language decoder. Combined with a two-stage pipeline of layout analysis and parallel recognition based on PP-DocLayout-V3, GLM-OCR delivers robust, high-quality OCR across diverse document layouts.

**Key Features**

- **State-of-the-Art Performance**: Scores 94.62 on OmniDocBench V1.5, ranking #1 overall, and delivers state-of-the-art results across major document understanding benchmarks, including formula recognition, table recognition, and information extraction.

- **Optimized for Real-World Scenarios**: Designed and optimized for practical business use cases, maintaining robust performance on complex tables, code-heavy documents, seals, and other challenging real-world layouts.

- **Efficient Inference**: With only 0.9B parameters, GLM-OCR supports deployment via vLLM, SGLang, and Ollama, significantly reducing inference latency and compute cost, making it well suited to high-concurrency services and edge deployments.

- **Easy to Use**: Fully open-sourced and equipped with a comprehensive [SDK](https://github.com/zai-org/GLM-OCR) and inference toolchain, offering simple installation, one-line invocation, and smooth integration into existing production pipelines.

## Performance

- Document Parsing & Information Extraction

![image](https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/docparse.png)

- Real-World Scenario Performance

![image](https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/realworld.png)

- Speed Test

For speed, we compared different OCR methods under identical hardware and test conditions (single replica, single concurrency), measuring parsing and Markdown export for both image and PDF inputs. GLM-OCR achieves a throughput of 1.86 pages/second on PDF documents and 0.67 images/second on images, significantly outperforming comparable models.

![image](https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/speed.png)

## Usage

### Official SDK

For document parsing tasks, we strongly recommend using our [official SDK](https://github.com/zai-org/GLM-OCR).
Compared with model-only inference, the SDK integrates PP-DocLayoutV3 and provides a complete, easy-to-use document parsing pipeline, including layout analysis and structured output generation. This significantly reduces the engineering overhead of building end-to-end document intelligence systems.

Note that the SDK currently targets document parsing only. For information extraction tasks, refer to the sections below and run inference directly against the model.

### vLLM

1. Install vLLM:

```bash
pip install -U vllm --extra-index-url https://wheels.vllm.ai/nightly
```

or pull the Docker image:

```bash
docker pull vllm/vllm-openai:nightly
```

2. Install the latest transformers and serve the model:

```bash
pip install git+https://github.com/huggingface/transformers.git
vllm serve zai-org/GLM-OCR --allowed-local-media-path / --port 8080
```

### SGLang

1. Pull the Docker image:

```bash
docker pull lmsysorg/sglang:dev
```

or build from source:

```bash
pip install git+https://github.com/sgl-project/sglang.git#subdirectory=python
```

2. Install the latest transformers and launch the server:

```bash
pip install git+https://github.com/huggingface/transformers.git
python -m sglang.launch_server --model zai-org/GLM-OCR --port 8080
```

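Both servers above expose an OpenAI-compatible API on the port given in the launch commands. As a minimal sketch (the base URL, image URL, and served model name are assumptions to adapt to your deployment), a request with the `openai` Python client looks like:

```python
from openai import OpenAI

# Point the client at the local vLLM or SGLang server; no real key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-OCR",
    messages=[
        {
            "role": "user",
            "content": [
                # Remote URLs work directly; local files can be sent as
                # base64 data URLs (vLLM additionally accepts local paths
                # when started with --allowed-local-media-path).
                {"type": "image_url", "image_url": {"url": "https://example.com/doc_page.png"}},
                {"type": "text", "text": "Text Recognition:"},
            ],
        }
    ],
    max_tokens=8192,
)
print(response.choices[0].message.content)
```
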
### Ollama

1. Download [Ollama](https://ollama.com/download).
2. Run:

```bash
ollama run glm-ocr
```

Ollama automatically uses the image file path when an image is dragged into the terminal:

```bash
ollama run glm-ocr "Text Recognition: ./image.png"
```

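For programmatic access, the `ollama` Python package accepts local image paths alongside the prompt. A minimal sketch, assuming the `glm-ocr` model has already been pulled as above:

```python
import ollama

# "images" takes local file paths; the prompt reuses the task prompt
# from this model card.
response = ollama.chat(
    model="glm-ocr",
    messages=[
        {
            "role": "user",
            "content": "Text Recognition:",
            "images": ["./image.png"],
        }
    ],
)
print(response["message"]["content"])
```
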
### Transformers

```bash
pip install git+https://github.com/huggingface/transformers.git
```

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

MODEL_PATH = "zai-org/GLM-OCR"
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "test_image.png"
            },
            {
                "type": "text",
                "text": "Text Recognition:"
            }
        ],
    }
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = AutoModelForImageTextToText.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
)
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=8192)
output_text = processor.decode(
    generated_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=False,
)
print(output_text)
```

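The snippet above handles a single image. For PDFs, pages must first be rasterized to images; the sketch below uses the third-party `pdf2image` package (an illustrative choice, not part of the official pipeline — the SDK handles PDFs natively) and reuses `processor` and `model` from the snippet above. It assumes a transformers version recent enough to accept PIL images directly in chat-template content:

```python
from pdf2image import convert_from_path  # requires poppler installed

pages = convert_from_path("document.pdf", dpi=200)  # one PIL image per page
for i, page in enumerate(pages):
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": page},
                {"type": "text", "text": "Text Recognition:"},
            ],
        }
    ]
    inputs = processor.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    inputs.pop("token_type_ids", None)
    generated_ids = model.generate(**inputs, max_new_tokens=8192)
    page_text = processor.decode(
        generated_ids[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    print(f"--- page {i + 1} ---\n{page_text}")
```
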
### Supported Prompts

GLM-OCR currently supports two types of prompt scenarios:

1. **Document Parsing** – extract raw content from documents. Supported task prompts:

```python
{
    "text": "Text Recognition:",
    "formula": "Formula Recognition:",
    "table": "Table Recognition:"
}
```

2. **Information Extraction** – extract structured information from documents. Prompts must follow a strict JSON schema. For example, to extract personal ID information (the leading line reads "Please output the information in the image in the following JSON format:"):

```text
请按下列JSON格式输出图中信息:
{
    "id_number": "",
    "last_name": "",
    "first_name": "",
    "date_of_birth": "",
    "address": {
        "street": "",
        "city": "",
        "state": "",
        "zip_code": ""
    },
    "dates": {
        "issue_date": "",
        "expiration_date": ""
    },
    "sex": ""
}
```

⚠️ Note: when using information extraction, the output must strictly adhere to the defined JSON schema to ensure downstream processing compatibility.

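Because the prompt itself defines the schema, downstream code can parse the reply and verify it before use. A minimal sketch using only the standard library (the helper name and key set are illustrative, mirroring the ID-card example above):

```python
import json

# Top-level keys promised by the ID-card schema above.
EXPECTED_KEYS = {
    "id_number", "last_name", "first_name", "date_of_birth",
    "address", "dates", "sex",
}

def parse_extraction(reply: str) -> dict:
    """Extract and validate the JSON object in a model reply."""
    # Tolerate stray text or special tokens around the outermost object.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    record = json.loads(reply[start : end + 1])
    missing = EXPECTED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing schema keys: {sorted(missing)}")
    return record
```
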
## Acknowledgement

This project is inspired by the excellent work of the following projects and communities:

- [PP-DocLayout-V3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3)
- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
- [MinerU](https://github.com/opendatalab/MinerU)

## License

The GLM-OCR model is released under the MIT License.

The complete OCR pipeline integrates [PP-DocLayoutV3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3) for document layout analysis, which is licensed under the Apache License 2.0. Users should comply with both licenses when using this project.

## Citation

If you find GLM-OCR useful in your research, please cite our technical report:

```bibtex
@misc{duan2026glmocrtechnicalreport,
      title={GLM-OCR Technical Report},
      author={Shuaiqi Duan and Yadong Xue and Weihan Wang and Zhe Su and Huan Liu and Sheng Yang and Guobing Gan and Guo Wang and Zihan Wang and Shengdong Yan and Dexin Jin and Yuxuan Zhang and Guohong Wen and Yanfeng Wang and Yutao Zhang and Xiaohan Zhang and Wenyi Hong and Yukuo Cen and Da Yin and Bin Chen and Wenmeng Yu and Xiaotao Gu and Jie Tang},
      year={2026},
      eprint={2603.10910},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2603.10910},
}
```
chat_template.jinja ADDED
@@ -0,0 +1,140 @@
[gMASK]<sop>
{%- if tools -%}
<|system|>
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{% for tool in tools %}
{{ tool | tojson(ensure_ascii=False) }}
{% endfor %}
</tools>

For each function call, output the function name and arguments within the following XML format:
<tool_call>{function-name}
<arg_key>{arg-key-1}</arg_key>
<arg_value>{arg-value-1}</arg_value>
<arg_key>{arg-key-2}</arg_key>
<arg_value>{arg-value-2}</arg_value>
...
</tool_call>{%- endif -%}
{%- macro visible_text(content) -%}
{%- if content is string -%}
{{- content }}
{%- elif content is iterable and content is not mapping -%}
{%- for item in content -%}
{%- if item is mapping and item.type == 'text' -%}
{{- item.text }}
{%- elif item is mapping and (item.type == 'image' or 'image' in item) -%}
<|begin_of_image|><|image|><|end_of_image|>
{%- elif item is mapping and (item.type == 'video' or 'video' in item) -%}
<|begin_of_video|><|video|><|end_of_video|>
{%- elif item is string -%}
{{- item }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{- content }}
{%- endif -%}
{%- endmacro -%}
{%- set ns = namespace(last_user_index=-1) %}
{%- for m in messages %}
{%- if m.role == 'user' %}
{% set ns.last_user_index = loop.index0 -%}
{%- endif %}
{%- endfor %}
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>
{% if m.content is string %}
{{ m.content }}
{%- else %}
{%- for item in m.content %}
{% if item.type == 'video' or 'video' in item %}
<|begin_of_video|><|video|><|end_of_video|>{% elif item.type == 'image' or 'image' in item %}
<|begin_of_image|><|image|><|end_of_image|>{% elif item.type == 'text' %}
{{ item.text }}
{%- endif %}
{%- endfor %}
{%- endif %}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
<|assistant|>
{%- set reasoning_content = '' %}
{%- set content = visible_text(m.content) %}
{%- if m.reasoning_content is string %}
{%- set reasoning_content = m.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_user_index and reasoning_content -%}
{{ '\n<think>' + reasoning_content.strip() + '</think>'}}
{%- else -%}
{{ '\n<think></think>' }}
{%- endif -%}
{%- if content.strip() -%}
{{ '\n' + content.strip() }}
{%- endif -%}
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<tool_call>' + tc.name }}
{% set _args = tc.arguments %}
{% for k, v in _args.items() %}
<arg_key>{{ k }}</arg_key>
<arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
{% endfor %}
</tool_call>{% endfor %}
{% endif %}
{%- elif m.role == 'tool' -%}
{%- if m.content is string -%}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|observation|>' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- m.content }}
{{- '\n</tool_response>' }}
{% elif m.content is iterable and m.content is not mapping %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|observation|>' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{%- for tr in m.content -%}
{%- if tr is mapping and tr.type is defined -%}
{%- set t = tr.type | lower -%}
{%- if t == 'text' and tr.text is defined -%}
{{ tr.text }}
{%- elif t in ['image', 'image_url'] -%}
<|begin_of_image|><|image|><|end_of_image|>
{%- elif t in ['video', 'video_url'] -%}
<|begin_of_video|><|video|><|end_of_video|>
{%- else -%}
{{ tr | tojson(ensure_ascii=False) }}
{%- endif -%}
{%- else -%}
{{ tr.output if tr.output is defined else tr }}
{%- endif -%}
{%- endfor -%}
{{- '\n</tool_response>' }}
{%- else -%}
<|observation|>{% for tr in m.content %}

<tool_response>
{{ tr.output if tr.output is defined else tr }}
</tool_response>{% endfor -%}
{% endif -%}
{%- elif m.role == 'system' -%}
<|system|>
{{ visible_text(m.content) }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
<|assistant|>
{{'<think></think>\n' if (enable_thinking is defined and not enable_thinking) else ''}}
{%- endif -%}
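To inspect what this template actually produces, it can be rendered without tokenization (a quick sanity check; the message mirrors the README example above):

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("zai-org/GLM-OCR")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "test_image.png"},
            {"type": "text", "text": "Text Recognition:"},
        ],
    }
]
# Prints the raw prompt string: [gMASK]<sop>, the <|user|> turn with
# image placeholder tokens, then the <|assistant|> generation prompt.
print(processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False))
```
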
config.json ADDED
@@ -0,0 +1,65 @@
{
  "architectures": [
    "GlmOcrForConditionalGeneration"
  ],
  "model_type": "glm_ocr",
  "text_config": {
    "model_type": "glm_ocr_text",
    "pad_token_id": 59246,
    "vocab_size": 59392,
    "eos_token_id": [
      59246,
      59253
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "head_dim": 128,
    "hidden_act": "silu",
    "hidden_size": 1536,
    "initializer_range": 0.02,
    "intermediate_size": 4608,
    "max_position_embeddings": 131072,
    "num_attention_heads": 16,
    "num_hidden_layers": 16,
    "num_nextn_predict_layers": 1,
    "num_key_value_heads": 8,
    "rms_norm_eps": 1e-05,
    "dtype": "bfloat16",
    "rope_parameters": {
      "rope_type": "default",
      "mrope_section": [
        16,
        24,
        24
      ],
      "partial_rotary_factor": 1.0,
      "rope_theta": 10000
    },
    "tie_word_embeddings": false,
    "use_cache": true
  },
  "vision_config": {
    "model_type": "glm_ocr_vision",
    "hidden_size": 1024,
    "depth": 24,
    "num_heads": 16,
    "attention_bias": true,
    "intermediate_size": 4096,
    "hidden_act": "silu",
    "hidden_dropout_prob": 0.0,
    "initializer_range": 0.02,
    "image_size": 336,
    "patch_size": 14,
    "out_hidden_size": 1536,
    "rms_norm_eps": 1e-05,
    "spatial_merge_size": 2,
    "temporal_patch_size": 2
  },
  "image_start_token_id": 59256,
  "image_end_token_id": 59257,
  "video_start_token_id": 59258,
  "video_end_token_id": 59259,
  "image_token_id": 59280,
  "video_token_id": 59281,
  "transformers_version": "5.0.1dev0"
}
generation_config.json ADDED
@@ -0,0 +1,10 @@
{
  "_from_model_config": true,
  "do_sample": false,
  "eos_token_id": [
    59246,
    59253
  ],
  "pad_token_id": 59246,
  "transformers_version": "5.0.1dev0"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a16eb0de98d199293371c560f95f83130d2a2c9612449df16839f08ff9498815
size 2650579464
preprocessor_config.json ADDED
@@ -0,0 +1,11 @@
{
  "size": {"shortest_edge": 12544, "longest_edge": 9633792},
  "do_rescale": true,
  "patch_size": 14,
  "temporal_patch_size": 2,
  "merge_size": 2,
  "image_mean": [0.48145466, 0.4578275, 0.40821073],
  "image_std": [0.26862954, 0.26130258, 0.27577711],
  "image_processor_type": "Glm46VImageProcessor",
  "processor_class": "Glm46VProcessor"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,49 @@
{
  "backend": "tokenizers",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "extra_special_tokens": [
    "<|endoftext|>",
    "[MASK]",
    "[gMASK]",
    "[sMASK]",
    "<sop>",
    "<eop>",
    "<|system|>",
    "<|user|>",
    "<|assistant|>",
    "<|observation|>",
    "<|begin_of_image|>",
    "<|end_of_image|>",
    "<|begin_of_video|>",
    "<|end_of_video|>",
    "<|begin_of_audio|>",
    "<|end_of_audio|>",
    "<|begin_of_transcription|>",
    "<|end_of_transcription|>",
    "<|code_prefix|>",
    "<|code_middle|>",
    "<|code_suffix|>",
    "<think>",
    "</think>",
    "<tool_call>",
    "</tool_call>",
    "<tool_response>",
    "</tool_response>",
    "<arg_key>",
    "</arg_key>",
    "<arg_value>",
    "</arg_value>",
    "/nothink",
    "<|begin_of_box|>",
    "<|end_of_box|>",
    "<|image|>",
    "<|video|>"
  ],
  "is_local": true,
  "model_max_length": 655380,
  "pad_token": "<|endoftext|>",
  "padding_side": "left",
  "processor_class": "Glm46VProcessor",
  "tokenizer_class": "TokenizersBackend"
}