Upload model from source account
- README.md +224 -0
- chat_template.jinja +140 -0
- config.json +65 -0
- generation_config.json +10 -0
- model.safetensors +3 -0
- preprocessor_config.json +11 -0
- tokenizer.json +0 -0
- tokenizer_config.json +49 -0
README.md
ADDED
@@ -0,0 +1,224 @@
---
license: mit
language:
- zh
- en
- fr
- es
- ru
- de
- ja
- ko
pipeline_tag: image-to-text
library_name: transformers
---

# GLM-OCR

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/logo.svg" width="40%"/>
</div>
<p align="center">
👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/wechat.jpg" target="_blank">WeChat</a> and <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> communities
<br>
📍 Use GLM-OCR's <a href="https://docs.z.ai/guides/vlm/glm-ocr" target="_blank">API</a>
<br>
👉 <a href="https://github.com/zai-org/GLM-OCR" target="_blank">GLM-OCR SDK</a> (recommended)
</p>
## Introduction

GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It introduces a Multi-Token Prediction (MTP) loss and stable full-task reinforcement learning to improve training efficiency, recognition accuracy, and generalization. The model integrates the CogViT visual encoder pre-trained on large-scale image–text data, a lightweight cross-modal connector with efficient token downsampling, and a GLM-0.5B language decoder. Combined with a two-stage pipeline of layout analysis and parallel recognition based on PP-DocLayout-V3, GLM-OCR delivers robust, high-quality OCR across diverse document layouts.

**Key Features**

- **State-of-the-Art Performance**: Achieves a score of 94.62 on OmniDocBench V1.5, ranking #1 overall, and delivers state-of-the-art results across major document understanding benchmarks, including formula recognition, table recognition, and information extraction.

- **Optimized for Real-World Scenarios**: Designed and optimized for practical business use cases, maintaining robust performance on complex tables, code-heavy documents, seals, and other challenging real-world layouts.

- **Efficient Inference**: With only 0.9B parameters, GLM-OCR supports deployment via vLLM, SGLang, and Ollama, significantly reducing inference latency and compute cost, making it ideal for high-concurrency services and edge deployments.

- **Easy to Use**: Fully open-sourced and equipped with a comprehensive [SDK](https://github.com/zai-org/GLM-OCR) and inference toolchain, offering simple installation, one-line invocation, and smooth integration into existing production pipelines.

## Performance

- Document Parsing & Information Extraction



- Real-World Scenario Performance



- Speed Test

For speed, we compared different OCR methods under identical hardware and testing conditions (single replica, single concurrency), evaluating how fast each parses image and PDF inputs and exports Markdown. GLM-OCR achieves a throughput of 1.86 pages per second on PDF documents and 0.67 images per second on standalone images, significantly outperforming comparable models.


## Usage

### Official SDK

For document parsing tasks, we strongly recommend using our [official SDK](https://github.com/zai-org/GLM-OCR).
Compared with model-only inference, the SDK integrates PP-DocLayoutV3 and provides a complete, easy-to-use pipeline for document parsing, including layout analysis and structured output generation. This significantly reduces the engineering overhead of building end-to-end document intelligence systems; a sketch of the two-stage idea follows below.

Note that the SDK is currently designed for document parsing only. For information extraction tasks, please refer to the following sections and run inference directly with the model.
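The SDK's exact interface is documented in the linked repository. Purely to illustrate the two-stage idea (layout analysis, then per-region recognition), here is a rough sketch; `detect_layout` is a hypothetical stand-in for the PP-DocLayoutV3 step that the SDK provides, and `ocr` wraps a model call such as the Transformers example further below:

```python
from PIL import Image

def detect_layout(page: Image.Image) -> list[dict]:
    """Hypothetical stand-in for PP-DocLayoutV3: returns regions as
    {"type": "text" | "table" | "formula", "box": (left, top, right, bottom)}."""
    raise NotImplementedError

PROMPTS = {"text": "Text Recognition:", "table": "Table Recognition:", "formula": "Formula Recognition:"}

def parse_page(page: Image.Image, ocr) -> str:
    # Stage 1: layout analysis splits the page into typed regions.
    regions = detect_layout(page)
    # Stage 2: each region is cropped and recognized independently
    # (the SDK batches these in parallel), then joined in reading order.
    parts = []
    for region in regions:
        crop = page.crop(region["box"])
        parts.append(ocr(crop, PROMPTS[region["type"]]))
    return "\n\n".join(parts)
```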
### vLLM

1. Install vLLM:

```bash
pip install -U vllm --extra-index-url https://wheels.vllm.ai/nightly
```

or pull the Docker image:

```bash
docker pull vllm/vllm-openai:nightly
```

2. Serve the model:

```bash
pip install git+https://github.com/huggingface/transformers.git
vllm serve zai-org/GLM-OCR --allowed-local-media-path / --port 8080
```
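The vLLM server speaks the OpenAI-compatible chat API, so any OpenAI client can drive it. A minimal sketch using the `openai` Python package (the port and model name follow the serve command above; the image path is a placeholder, and `file://` URLs work here because the server was started with `--allowed-local-media-path /`):

```python
from openai import OpenAI

# Point the client at the local server; no real API key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-OCR",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/test_image.png"}},
            {"type": "text", "text": "Text Recognition:"},
        ],
    }],
)
print(response.choices[0].message.content)
```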
### SGLang

1. Pull the Docker image:

```bash
docker pull lmsysorg/sglang:dev
```

or build it from source:

```bash
pip install git+https://github.com/sgl-project/sglang.git#subdirectory=python
```

2. Serve the model:

```bash
pip install git+https://github.com/huggingface/transformers.git
python -m sglang.launch_server --model zai-org/GLM-OCR --port 8080
```

The SGLang server exposes the same OpenAI-compatible API, so the client snippet above works unchanged.
### Ollama

1. Download [Ollama](https://ollama.com/download).
2. Run the model:

```bash
ollama run glm-ocr
```

Ollama automatically fills in the image file path when an image is dragged into the terminal:

```bash
ollama run glm-ocr "Text Recognition: ./image.png"
```
### Transformers

```bash
pip install git+https://github.com/huggingface/transformers.git
```

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

MODEL_PATH = "zai-org/GLM-OCR"
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "test_image.png"
            },
            {
                "type": "text",
                "text": "Text Recognition:"
            }
        ],
    }
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = AutoModelForImageTextToText.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
)
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)
# The processor may emit token_type_ids that generate() does not accept.
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=8192)
# Decode only the newly generated tokens, skipping the prompt portion.
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```
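The snippet above handles a single image; PDFs first need to be rasterized page by page. A minimal sketch using the third-party `pdf2image` package (an assumption on our part, not part of this repo's toolchain; it requires poppler installed):

```python
from pdf2image import convert_from_path

# Render each PDF page to a PIL image, then reuse the chat-template flow above per page.
pages = convert_from_path("document.pdf", dpi=200)
for i, page in enumerate(pages):
    path = f"page_{i}.png"
    page.save(path)
    # Build `messages` with {"type": "image", "url": path} and call model.generate() as above.
```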
### Supported Prompts

GLM-OCR currently supports two types of prompt scenarios:

1. **Document Parsing** – extract raw content from documents. The supported tasks use these prompts:

```python
{
    "text": "Text Recognition:",
    "formula": "Formula Recognition:",
    "table": "Table Recognition:"
}
```
2. **Information Extraction** – extract structured information from documents. Prompts must follow a strict JSON schema. For example, to extract personal ID information:

```text
Output the information in the image in the following JSON format:
{
    "id_number": "",
    "last_name": "",
    "first_name": "",
    "date_of_birth": "",
    "address": {
        "street": "",
        "city": "",
        "state": "",
        "zip_code": ""
    },
    "dates": {
        "issue_date": "",
        "expiration_date": ""
    },
    "sex": ""
}
```

⚠️ Note: When using information extraction, the output must strictly adhere to the defined JSON schema to ensure downstream processing compatibility.
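Because downstream systems usually parse this output directly, it is worth validating what the model returns. A small sketch (the key set mirrors the example schema above):

```python
import json

EXPECTED_KEYS = {"id_number", "last_name", "first_name", "date_of_birth", "address", "dates", "sex"}

def parse_extraction(raw: str) -> dict:
    """Parse model output and check the top-level keys match the requested schema."""
    data = json.loads(raw)  # raises json.JSONDecodeError if the output is not valid JSON
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model output is missing keys: {sorted(missing)}")
    return data
```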
## Acknowledgement

This project is inspired by the excellent work of the following projects and communities:

- [PP-DocLayout-V3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3)
- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
- [MinerU](https://github.com/opendatalab/MinerU)

## License

The GLM-OCR model is released under the MIT License.

The complete OCR pipeline integrates [PP-DocLayoutV3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3) for document layout analysis, which is licensed under the Apache License 2.0. Users should comply with both licenses when using this project.
chat_template.jinja
ADDED
@@ -0,0 +1,140 @@
```jinja
[gMASK]<sop>
{%- if tools -%}
<|system|>
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{% for tool in tools %}
{{ tool | tojson(ensure_ascii=False) }}
{% endfor %}
</tools>

For each function call, output the function name and arguments within the following XML format:
<tool_call>{function-name}
<arg_key>{arg-key-1}</arg_key>
<arg_value>{arg-value-1}</arg_value>
<arg_key>{arg-key-2}</arg_key>
<arg_value>{arg-value-2}</arg_value>
...
</tool_call>{%- endif -%}
{%- macro visible_text(content) -%}
{%- if content is string -%}
{{- content }}
{%- elif content is iterable and content is not mapping -%}
{%- for item in content -%}
{%- if item is mapping and item.type == 'text' -%}
{{- item.text }}
{%- elif item is mapping and (item.type == 'image' or 'image' in item) -%}
<|begin_of_image|><|image|><|end_of_image|>
{%- elif item is mapping and (item.type == 'video' or 'video' in item) -%}
<|begin_of_video|><|video|><|end_of_video|>
{%- elif item is string -%}
{{- item }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{- content }}
{%- endif -%}
{%- endmacro -%}
{%- set ns = namespace(last_user_index=-1) %}
{%- for m in messages %}
{%- if m.role == 'user' %}
{% set ns.last_user_index = loop.index0 -%}
{%- endif %}
{%- endfor %}
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>
{% if m.content is string %}
{{ m.content }}
{%- else %}
{%- for item in m.content %}
{% if item.type == 'video' or 'video' in item %}
<|begin_of_video|><|video|><|end_of_video|>{% elif item.type == 'image' or 'image' in item %}
<|begin_of_image|><|image|><|end_of_image|>{% elif item.type == 'text' %}
{{ item.text }}
{%- endif %}
{%- endfor %}
{%- endif %}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
<|assistant|>
{%- set reasoning_content = '' %}
{%- set content = visible_text(m.content) %}
{%- if m.reasoning_content is string %}
{%- set reasoning_content = m.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_user_index and reasoning_content -%}
{{ '\n<think>' + reasoning_content.strip() + '</think>'}}
{%- else -%}
{{ '\n<think></think>' }}
{%- endif -%}
{%- if content.strip() -%}
{{ '\n' + content.strip() }}
{%- endif -%}
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
{%- if tc.function %}
{%- set tc = tc.function %}
{%- endif %}
{{ '\n<tool_call>' + tc.name }}
{% set _args = tc.arguments %}
{% for k, v in _args.items() %}
<arg_key>{{ k }}</arg_key>
<arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
{% endfor %}
</tool_call>{% endfor %}
{% endif %}
{%- elif m.role == 'tool' -%}
{%- if m.content is string -%}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|observation|>' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- m.content }}
{{- '\n</tool_response>' }}
{% elif m.content is iterable and m.content is not mapping %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|observation|>' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{%- for tr in m.content -%}
{%- if tr is mapping and tr.type is defined -%}
{%- set t = tr.type | lower -%}
{%- if t == 'text' and tr.text is defined -%}
{{ tr.text }}
{%- elif t in ['image', 'image_url'] -%}
<|begin_of_image|><|image|><|end_of_image|>
{%- elif t in ['video', 'video_url'] -%}
<|begin_of_video|><|video|><|end_of_video|>
{%- else -%}
{{ tr | tojson(ensure_ascii=False) }}
{%- endif -%}
{%- else -%}
{{ tr.output if tr.output is defined else tr }}
{%- endif -%}
{%- endfor -%}
{{- '\n</tool_response>' }}
{%- else -%}
<|observation|>{% for tr in m.content %}

<tool_response>
{{ tr.output if tr.output is defined else tr }}
</tool_response>{% endfor -%}
{% endif -%}
{%- elif m.role == 'system' -%}
<|system|>
{{ visible_text(m.content) }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
<|assistant|>
{{'<think></think>\n' if (enable_thinking is defined and not enable_thinking) else ''}}
{%- endif -%}
```
config.json
ADDED
@@ -0,0 +1,65 @@
```json
{
  "architectures": [
    "GlmOcrForConditionalGeneration"
  ],
  "model_type": "glm_ocr",
  "text_config": {
    "model_type": "glm_ocr_text",
    "pad_token_id": 59246,
    "vocab_size": 59392,
    "eos_token_id": [
      59246,
      59253
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "head_dim": 128,
    "hidden_act": "silu",
    "hidden_size": 1536,
    "initializer_range": 0.02,
    "intermediate_size": 4608,
    "max_position_embeddings": 131072,
    "num_attention_heads": 16,
    "num_hidden_layers": 16,
    "num_nextn_predict_layers": 1,
    "num_key_value_heads": 8,
    "rms_norm_eps": 1e-05,
    "dtype": "bfloat16",
    "rope_parameters": {
      "rope_type": "default",
      "mrope_section": [
        16,
        24,
        24
      ],
      "partial_rotary_factor": 1.0,
      "rope_theta": 10000
    },
    "tie_word_embeddings": false,
    "use_cache": true
  },
  "vision_config": {
    "model_type": "glm_ocr_vision",
    "hidden_size": 1024,
    "depth": 24,
    "num_heads": 16,
    "attention_bias": true,
    "intermediate_size": 4096,
    "hidden_act": "silu",
    "hidden_dropout_prob": 0.0,
    "initializer_range": 0.02,
    "image_size": 336,
    "patch_size": 14,
    "out_hidden_size": 1536,
    "rms_norm_eps": 1e-05,
    "spatial_merge_size": 2,
    "temporal_patch_size": 2
  },
  "image_start_token_id": 59256,
  "image_end_token_id": 59257,
  "video_start_token_id": 59258,
  "video_end_token_id": 59259,
  "image_token_id": 59280,
  "video_token_id": 59281,
  "transformers_version": "5.0.1dev0"
}
```
generation_config.json
ADDED
@@ -0,0 +1,10 @@
```json
{
  "_from_model_config": true,
  "do_sample": false,
  "eos_token_id": [
    59246,
    59253
  ],
  "pad_token_id": 59246,
  "transformers_version": "5.0.1dev0"
}
```
model.safetensors
ADDED
@@ -0,0 +1,3 @@
```text
version https://git-lfs.github.com/spec/v1
oid sha256:a16eb0de98d199293371c560f95f83130d2a2c9612449df16839f08ff9498815
size 2650579464
```
preprocessor_config.json
ADDED
@@ -0,0 +1,11 @@
```json
{
  "size": {"shortest_edge": 12544, "longest_edge": 9633792},
  "do_rescale": true,
  "patch_size": 14,
  "temporal_patch_size": 2,
  "merge_size": 2,
  "image_mean": [0.48145466, 0.4578275, 0.40821073],
  "image_std": [0.26862954, 0.26130258, 0.27577711],
  "image_processor_type": "Glm46VImageProcessor",
  "processor_class": "Glm46VProcessor"
}
```
tokenizer.json
ADDED

The diff for this file is too large to render; see the raw file.
tokenizer_config.json
ADDED
@@ -0,0 +1,49 @@
```json
{
  "backend": "tokenizers",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "extra_special_tokens": [
    "<|endoftext|>",
    "[MASK]",
    "[gMASK]",
    "[sMASK]",
    "<sop>",
    "<eop>",
    "<|system|>",
    "<|user|>",
    "<|assistant|>",
    "<|observation|>",
    "<|begin_of_image|>",
    "<|end_of_image|>",
    "<|begin_of_video|>",
    "<|end_of_video|>",
    "<|begin_of_audio|>",
    "<|end_of_audio|>",
    "<|begin_of_transcription|>",
    "<|end_of_transcription|>",
    "<|code_prefix|>",
    "<|code_middle|>",
    "<|code_suffix|>",
    "<think>",
    "</think>",
    "<tool_call>",
    "</tool_call>",
    "<tool_response>",
    "</tool_response>",
    "<arg_key>",
    "</arg_key>",
    "<arg_value>",
    "</arg_value>",
    "/nothink",
    "<|begin_of_box|>",
    "<|end_of_box|>",
    "<|image|>",
    "<|video|>"
  ],
  "is_local": true,
  "model_max_length": 655380,
  "pad_token": "<|endoftext|>",
  "padding_side": "left",
  "processor_class": "Glm46VProcessor",
  "tokenizer_class": "TokenizersBackend"
}
```