Update README.md #25

by Jared4Real · opened

README.md CHANGED
```diff
@@ -22,6 +22,8 @@ library_name: transformers
 👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/wechat.jpg" target="_blank">WeChat</a> and <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community
 <br>
 📍 Use GLM-OCR's <a href="https://docs.z.ai/guides/vlm/glm-ocr" target="_blank">API</a>
+<br>
+🚀 <a href="https://github.com/zai-org/GLM-OCR" target="_blank">GLM-OCR SDK</a> Recommended
 </p>
 
 
@@ -59,6 +61,13 @@ For speed, we compared different OCR methods under identical hardware and testing
 
 ## Usage
 
+### Official SDK
+
+For document parsing tasks, we strongly recommend using our [official SDK](https://github.com/zai-org/GLM-OCR).
+Compared with model-only inference, the SDK integrates PP-DocLayoutV3 and provides a complete, easy-to-use pipeline for document parsing, including layout analysis and structured output generation. This significantly reduces the engineering overhead required to build end-to-end document intelligence systems.
+
+Note that the SDK is currently designed for document parsing tasks only. For information extraction tasks, please refer to the following section and run inference directly with the model.
+
 ### vLLM
 
 1. run
@@ -200,10 +209,6 @@ GLM-OCR currently supports two types of prompt scenarios:
 
 ⚠️ Note: When using information extraction, the output must strictly adhere to the defined JSON schema to ensure downstream processing compatibility.
 
-## GLM-OCR SDK
-
-We provide an easy-to-use SDK for using GLM-OCR more efficiently and conveniently. please check our [github](https://github.com/zai-org/GLM-OCR) to get more detail.
-
 ## Acknowledgement
 
 This project is inspired by the excellent work of the following projects and communities:
```
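The ⚠️ note above says information-extraction output must strictly adhere to the defined JSON schema. A minimal, stdlib-only sketch of such a strict check — the schema and field names (`invoice_no`, `total`) are hypothetical illustrations, not taken from GLM-OCR:

```python
import json

# Hypothetical target schema for one extraction task: required field -> type.
REQUIRED_FIELDS = {"invoice_no": str, "total": float}

def validate_extraction(raw: str) -> dict:
    """Parse model output and reject anything that deviates from the schema."""
    data = json.loads(raw)  # raises on malformed JSON
    extra = set(data) - set(REQUIRED_FIELDS)
    if extra:
        raise ValueError(f"unexpected fields: {sorted(extra)}")
    for field, typ in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"field {field!r} should be {typ.__name__}")
    return data

print(validate_extraction('{"invoice_no": "A-123", "total": 42.5}'))
```

Running a gate like this between the model and downstream consumers is one way to enforce the "strictly adhere" requirement in practice.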
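The added SDK section describes a pipeline shape: layout analysis (PP-DocLayoutV3) followed by per-region recognition and structured output. A sketch of that shape only — every name below (`Region`, `detect_layout`, `recognize`, `parse_document`) is a hypothetical stand-in, not the real SDK API, which lives on the linked GitHub:

```python
from dataclasses import dataclass

@dataclass
class Region:
    kind: str    # e.g. "text", "table", "figure"
    bbox: tuple  # (x0, y0, x1, y1) in page coordinates
    text: str = ""

def detect_layout(page) -> list:
    """Stand-in for a layout model such as PP-DocLayoutV3: returns ordered regions."""
    return [Region("text", (0, 0, 100, 20)), Region("table", (0, 30, 100, 80))]

def recognize(region: Region) -> Region:
    """Stand-in for the OCR model applied to one cropped region."""
    region.text = f"<{region.kind} content>"
    return region

def parse_document(page) -> str:
    """Layout analysis -> per-region OCR -> one structured text output."""
    regions = [recognize(r) for r in detect_layout(page)]
    return "\n\n".join(r.text for r in regions)

print(parse_document(page=None))
```

This is the engineering overhead the SDK section says it absorbs: region detection, reading order, and stitching recognized regions into a single structured result.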