Upload files

- README.md +201 -3
- chat_template.jinja +140 -0
- config.json +65 -0
- configuration.json +1 -0
- generation_config.json +10 -0
- preprocessor_config.json +11 -0
- tokenizer.json +0 -0
- tokenizer_config.json +49 -0
README.md
CHANGED
@@ -1,3 +1,201 @@
---
license: mit
language:
- zh
- en
- fr
- es
- ru
- de
- ja
- ko
pipeline_tag: image-to-text
library_name: transformers
---

# GLM-OCR

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/logo.svg" width="40%"/>
</div>
<p align="center">
👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/wechat.png" target="_blank">WeChat</a> and <a href="https://discord.gg/8KFjEec7" target="_blank">Discord</a> communities
<br>
📍 Use GLM-OCR's <a href="https://docs.z.ai/guides/image/glm-ocr" target="_blank">API</a>
</p>
## Introduction

GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It introduces a Multi-Token Prediction (MTP) loss and stable full-task reinforcement learning to improve training efficiency, recognition accuracy, and generalization. The model integrates the CogViT visual encoder pre-trained on large-scale image–text data, a lightweight cross-modal connector with efficient token downsampling, and a GLM-0.5B language decoder. Combined with a two-stage pipeline of layout analysis and parallel recognition based on PP-DocLayout-V3, GLM-OCR delivers robust and high-quality OCR performance across diverse document layouts.

**Key Features**

- **State-of-the-Art Performance**: Achieves a score of 94.62 on OmniDocBench V1.5, ranking #1 overall, and delivers state-of-the-art results across major document understanding benchmarks, including formula recognition, table recognition, and information extraction.

- **Optimized for Real-World Scenarios**: Designed and optimized for practical business use cases, maintaining robust performance on complex tables, code-heavy documents, seals, and other challenging real-world layouts.

- **Efficient Inference**: With only 0.9B parameters, GLM-OCR supports deployment via vLLM, SGLang, and Ollama, significantly reducing inference latency and compute cost, making it ideal for high-concurrency services and edge deployments.

- **Easy to Use**: Fully open-sourced and equipped with a comprehensive [SDK](https://github.com/zai-org/GLM-OCR) and inference toolchain, offering simple installation, one-line invocation, and smooth integration into existing production pipelines.
## Usage

### vLLM

1. Install vLLM:

```bash
pip install -U vllm --extra-index-url https://wheels.vllm.ai/nightly
```

or pull the Docker image:

```bash
docker pull vllm/vllm-openai:nightly
```

2. Install the latest Transformers and serve the model:

```bash
pip install git+https://github.com/huggingface/transformers.git
vllm serve zai-org/GLM-OCR --allowed-local-media-path / --port 8080
```

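Once the server is up, it exposes an OpenAI-compatible API. A minimal request sketch in Python (the port matches the command above; the image path and task prompt are illustrative):

```python
# Minimal sketch: send one OCR request to the vLLM server started above.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

# Encode a local image as a data URL; any of the supported task prompts
# ("Text Recognition:", "Formula Recognition:", "Table Recognition:") works here.
with open("test_image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="zai-org/GLM-OCR",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text", "text": "Text Recognition:"},
        ],
    }],
)
print(response.choices[0].message.content)
```
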
### SGLang

1. Pull the Docker image:

```bash
docker pull lmsysorg/sglang:dev
```

or build it from source:

```bash
pip install git+https://github.com/sgl-project/sglang.git#subdirectory=python
```

2. Install the latest Transformers and launch the server:

```bash
pip install git+https://github.com/huggingface/transformers.git
python -m sglang.launch_server --model zai-org/GLM-OCR --port 8080
```

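SGLang likewise serves an OpenAI-compatible API, so the request sketch from the vLLM section should work against this endpoint unchanged.
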
### Ollama

1. Download [Ollama](https://ollama.com/download).
2. Run the model:

```bash
ollama run glm-ocr
```

Ollama automatically picks up the image file path when an image is dragged into the terminal:

```bash
ollama run glm-ocr "Text Recognition: ./image.png"
```

### Transformers

```bash
pip install git+https://github.com/huggingface/transformers.git
```

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

MODEL_PATH = "zai-org/GLM-OCR"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "test_image.png"},
            {"type": "text", "text": "Text Recognition:"},
        ],
    }
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = AutoModelForImageTextToText.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
)
# Render the chat template, tokenize, and move tensors to the model device.
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
inputs.pop("token_type_ids", None)  # not consumed by generate()
generated_ids = model.generate(**inputs, max_new_tokens=8192)
# Decode only the newly generated tokens; special tokens are kept because
# they can mark structured parts of the output.
output_text = processor.decode(
    generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False
)
print(output_text)
```

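If you only need plain text, pass `skip_special_tokens=True` to `processor.decode` to strip the special tokens from the output.
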
### Supported Prompts

GLM-OCR currently supports two types of prompt scenarios:

1. **Document Parsing** – extract raw content from documents. The supported tasks and their prompts are:

```python
{
    "text": "Text Recognition:",
    "formula": "Formula Recognition:",
    "table": "Table Recognition:"
}
```

2. **Information Extraction** – extract structured information from documents. Prompts must follow a strict JSON schema. For example, to extract personal ID information (the first line reads "Please output the information in the image in the following JSON format:"):

```
请按下列JSON格式输出图中信息：
{
    "id_number": "",
    "last_name": "",
    "first_name": "",
    "date_of_birth": "",
    "address": {
        "street": "",
        "city": "",
        "state": "",
        "zip_code": ""
    },
    "dates": {
        "issue_date": "",
        "expiration_date": ""
    },
    "sex": ""
}
```

⚠️ Note: When using information extraction, the output must strictly adhere to the defined JSON schema to ensure downstream processing compatibility. A request sketch follows below.

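A minimal extraction sketch against the OpenAI-compatible server from the vLLM section (the endpoint, image path, and schema fields are illustrative, and it assumes the model returns bare JSON):

```python
# Hedged sketch: information extraction through the OpenAI-compatible server.
import base64
import json

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

# Prompt prefix: "Please output the information in the image in the
# following JSON format:" followed by the schema.
schema = {"id_number": "", "last_name": "", "first_name": ""}
prompt = "请按下列JSON格式输出图中信息：\n" + json.dumps(schema, indent=2)

with open("id_card.png", "rb") as f:  # illustrative image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="zai-org/GLM-OCR",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text", "text": prompt},
        ],
    }],
)
# Parse the structured output; this assumes the reply is bare JSON.
result = json.loads(response.choices[0].message.content)
print(result["id_number"])
```
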
## GLM-OCR SDK

We provide an easy-to-use SDK for working with GLM-OCR more efficiently and conveniently. Please see our [GitHub repository](https://github.com/zai-org/GLM-OCR) for more details.

## Acknowledgement

This project is inspired by the excellent work of the following projects and communities:

- [PP-DocLayout-V3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3)
- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
- [MinerU](https://github.com/opendatalab/MinerU)

## License

The GLM-OCR model is released under the MIT License.

The complete OCR pipeline integrates [PP-DocLayoutV3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3) for document layout analysis, which is licensed under the Apache License 2.0. Users should comply with both licenses when using this project.
chat_template.jinja
ADDED
@@ -0,0 +1,140 @@
```jinja
[gMASK]<sop>
{%- if tools -%}
<|system|>
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{% for tool in tools %}
{{ tool | tojson(ensure_ascii=False) }}
{% endfor %}
</tools>

For each function call, output the function name and arguments within the following XML format:
<tool_call>{function-name}
<arg_key>{arg-key-1}</arg_key>
<arg_value>{arg-value-1}</arg_value>
<arg_key>{arg-key-2}</arg_key>
<arg_value>{arg-value-2}</arg_value>
...
</tool_call>{%- endif -%}
{%- macro visible_text(content) -%}
    {%- if content is string -%}
        {{- content }}
    {%- elif content is iterable and content is not mapping -%}
        {%- for item in content -%}
            {%- if item is mapping and item.type == 'text' -%}
                {{- item.text }}
            {%- elif item is mapping and (item.type == 'image' or 'image' in item) -%}
                <|begin_of_image|><|image|><|end_of_image|>
            {%- elif item is mapping and (item.type == 'video' or 'video' in item) -%}
                <|begin_of_video|><|video|><|end_of_video|>
            {%- elif item is string -%}
                {{- item }}
            {%- endif -%}
        {%- endfor -%}
    {%- else -%}
        {{- content }}
    {%- endif -%}
{%- endmacro -%}
{%- set ns = namespace(last_user_index=-1) %}
{%- for m in messages %}
    {%- if m.role == 'user' %}
        {% set ns.last_user_index = loop.index0 -%}
    {%- endif %}
{%- endfor %}
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>
{% if m.content is string %}
{{ m.content }}
{%- else %}
{%- for item in m.content %}
{% if item.type == 'video' or 'video' in item %}
<|begin_of_video|><|video|><|end_of_video|>{% elif item.type == 'image' or 'image' in item %}
<|begin_of_image|><|image|><|end_of_image|>{% elif item.type == 'text' %}
{{ item.text }}
{%- endif %}
{%- endfor %}
{%- endif %}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
<|assistant|>
{%- set reasoning_content = '' %}
{%- set content = visible_text(m.content) %}
{%- if m.reasoning_content is string %}
    {%- set reasoning_content = m.reasoning_content %}
{%- else %}
    {%- if '</think>' in content %}
        {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
        {%- set content = content.split('</think>')[-1].lstrip('\n') %}
    {%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_user_index and reasoning_content -%}
    {{ '\n<think>' + reasoning_content.strip() + '</think>'}}
{%- else -%}
    {{ '\n<think></think>' }}
{%- endif -%}
{%- if content.strip() -%}
    {{ '\n' + content.strip() }}
{%- endif -%}
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
{%- if tc.function %}
    {%- set tc = tc.function %}
{%- endif %}
{{ '\n<tool_call>' + tc.name }}
{% set _args = tc.arguments %}
{% for k, v in _args.items() %}
<arg_key>{{ k }}</arg_key>
<arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
{% endfor %}
</tool_call>{% endfor %}
{% endif %}
{%- elif m.role == 'tool' -%}
{%- if m.content is string -%}
    {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
        {{- '<|observation|>' }}
    {%- endif %}
    {{- '\n<tool_response>\n' }}
    {{- m.content }}
    {{- '\n</tool_response>' }}
{% elif m.content is iterable and m.content is not mapping %}
    {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
        {{- '<|observation|>' }}
    {%- endif %}
    {{- '\n<tool_response>\n' }}
    {%- for tr in m.content -%}
        {%- if tr is mapping and tr.type is defined -%}
            {%- set t = tr.type | lower -%}
            {%- if t == 'text' and tr.text is defined -%}
                {{ tr.text }}
            {%- elif t in ['image', 'image_url'] -%}
                <|begin_of_image|><|image|><|end_of_image|>
            {%- elif t in ['video', 'video_url'] -%}
                <|begin_of_video|><|video|><|end_of_video|>
            {%- else -%}
                {{ tr | tojson(ensure_ascii=False) }}
            {%- endif -%}
        {%- else -%}
            {{ tr.output if tr.output is defined else tr }}
        {%- endif -%}
    {%- endfor -%}
    {{- '\n</tool_response>' }}
{%- else -%}
<|observation|>{% for tr in m.content %}

<tool_response>
{{ tr.output if tr.output is defined else tr }}
</tool_response>{% endfor -%}
{% endif -%}
{%- elif m.role == 'system' -%}
<|system|>
{{ visible_text(m.content) }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
<|assistant|>
{{'<think></think>\n' if (enable_thinking is defined and not enable_thinking) else ''}}
{%- endif -%}
```
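For intuition, rendering this template over the single-image OCR message from the Transformers example (no tools, `add_generation_prompt=True`) produces roughly:

```
[gMASK]<sop><|user|>
<|begin_of_image|><|image|><|end_of_image|>
Text Recognition:<|assistant|>
```

(The exact whitespace depends on the renderer's block-trimming settings.)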
config.json
ADDED
@@ -0,0 +1,65 @@
```json
{
  "architectures": [
    "GlmOcrForConditionalGeneration"
  ],
  "model_type": "glm_ocr",
  "text_config": {
    "model_type": "glm_ocr_text",
    "pad_token_id": 59246,
    "vocab_size": 59392,
    "eos_token_id": [
      59246,
      59253
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "head_dim": 128,
    "hidden_act": "silu",
    "hidden_size": 1536,
    "initializer_range": 0.02,
    "intermediate_size": 4608,
    "max_position_embeddings": 131072,
    "num_attention_heads": 16,
    "num_hidden_layers": 16,
    "num_nextn_predict_layers": 1,
    "num_key_value_heads": 8,
    "rms_norm_eps": 1e-05,
    "dtype": "bfloat16",
    "rope_parameters": {
      "rope_type": "default",
      "mrope_section": [
        16,
        24,
        24
      ],
      "partial_rotary_factor": 1.0,
      "rope_theta": 10000
    },
    "tie_word_embeddings": false,
    "use_cache": true
  },
  "vision_config": {
    "model_type": "glm_ocr_vision",
    "hidden_size": 1024,
    "depth": 24,
    "num_heads": 16,
    "attention_bias": true,
    "intermediate_size": 4096,
    "hidden_act": "silu",
    "hidden_dropout_prob": 0.0,
    "initializer_range": 0.02,
    "image_size": 336,
    "patch_size": 14,
    "out_hidden_size": 1536,
    "rms_norm_eps": 1e-05,
    "spatial_merge_size": 2,
    "temporal_patch_size": 2
  },
  "image_start_token_id": 59256,
  "image_end_token_id": 59257,
  "video_start_token_id": 59258,
  "video_end_token_id": 59259,
  "image_token_id": 59280,
  "video_token_id": 59281,
  "transformers_version": "5.0.1dev0"
}
```
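Note that `num_nextn_predict_layers: 1` in `text_config` appears to correspond to the Multi-Token Prediction (MTP) objective mentioned in the introduction.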
configuration.json
ADDED
@@ -0,0 +1 @@

```json
{"framework": "Pytorch", "task": "image-text-to-text"}
```
generation_config.json
ADDED
@@ -0,0 +1,10 @@

```json
{
  "_from_model_config": true,
  "do_sample": false,
  "eos_token_id": [
    59246,
    59253
  ],
  "pad_token_id": 59246,
  "transformers_version": "5.0.1dev0"
}
```
preprocessor_config.json
ADDED
@@ -0,0 +1,11 @@

```json
{
  "size": {"shortest_edge": 12544, "longest_edge": 9633792},
  "do_rescale": true,
  "patch_size": 14,
  "temporal_patch_size": 2,
  "merge_size": 2,
  "image_mean": [0.48145466, 0.4578275, 0.40821073],
  "image_std": [0.26862954, 0.26130258, 0.27577711],
  "image_processor_type": "Glm46VImageProcessor",
  "processor_class": "Glm46VProcessor"
}
```
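If this processor follows the usual dynamic-resolution convention (an assumption, matching other `Glm46V`/Qwen2-VL-style processors), `shortest_edge` and `longest_edge` act as pixel budgets rather than edge lengths, and each final visual token covers a `patch_size * merge_size` square of pixels:

```python
# Hedged sketch: the visual-token budget these settings would imply
# under the usual pixel-budget convention (not confirmed by this repo).
patch_size = 14
merge_size = 2
min_pixels = 12_544      # "shortest_edge"
max_pixels = 9_633_792   # "longest_edge"

pixels_per_token = (patch_size * merge_size) ** 2  # 28 * 28 = 784
print(min_pixels // pixels_per_token)   # 16 visual tokens per image, minimum
print(max_pixels // pixels_per_token)   # 12288 visual tokens per image, maximum
```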
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,49 @@
```json
{
  "backend": "tokenizers",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "extra_special_tokens": [
    "<|endoftext|>",
    "[MASK]",
    "[gMASK]",
    "[sMASK]",
    "<sop>",
    "<eop>",
    "<|system|>",
    "<|user|>",
    "<|assistant|>",
    "<|observation|>",
    "<|begin_of_image|>",
    "<|end_of_image|>",
    "<|begin_of_video|>",
    "<|end_of_video|>",
    "<|begin_of_audio|>",
    "<|end_of_audio|>",
    "<|begin_of_transcription|>",
    "<|end_of_transcription|>",
    "<|code_prefix|>",
    "<|code_middle|>",
    "<|code_suffix|>",
    "<think>",
    "</think>",
    "<tool_call>",
    "</tool_call>",
    "<tool_response>",
    "</tool_response>",
    "<arg_key>",
    "</arg_key>",
    "<arg_value>",
    "</arg_value>",
    "/nothink",
    "<|begin_of_box|>",
    "<|end_of_box|>",
    "<|image|>",
    "<|video|>"
  ],
  "is_local": true,
  "model_max_length": 655380,
  "pad_token": "<|endoftext|>",
  "padding_side": "left",
  "processor_class": "Glm46VProcessor",
  "tokenizer_class": "TokenizersBackend"
}
```