---
tags:
- ocr
- document-processing
- glm-ocr
- markdown
- uv-script
- generated
---

# Document OCR using GLM-OCR
|
|
This dataset contains OCR results for the images in [yhan86/mis1997image](https://huggingface.co/datasets/yhan86/mis1997image), produced with GLM-OCR, a compact 0.9B-parameter OCR model with state-of-the-art benchmark results.
|
|
## Processing Details
|
|
- **Source Dataset**: [yhan86/mis1997image](https://huggingface.co/datasets/yhan86/mis1997image)
- **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
- **Task**: text recognition
- **Number of Samples**: 13
- **Processing Time**: 1.7 min
- **Processing Date**: 2026-04-02 18:31 UTC
|
|
### Configuration
|
|
- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 16
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 8,192
- **Temperature**: 0.01
- **Top P**: 1e-05
- **GPU Memory Utilization**: 80.0%
|
|
## Model Information
|
|
GLM-OCR is a compact, high-performance OCR model:
- 0.9B parameters
- 94.62% on OmniDocBench V1.5
- CogViT visual encoder + GLM-0.5B language decoder
- Multi-Token Prediction (MTP) loss for efficiency
- Multilingual: Chinese, English, French, Spanish, Russian, German, Japanese, Korean
- MIT licensed
|
|
## Dataset Structure
|
|
The dataset contains all original columns plus:
- `markdown`: The extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied to this dataset
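A row can be inspected like this; the row below is a made-up illustration, and the keys inside `inference_info` (`model_id`, `column_name`) are assumptions about the schema, not guaranteed field names:

```python
import json

# Hypothetical row; real rows come from loading the dataset with
# the `datasets` library. Field names inside inference_info are
# illustrative assumptions.
row = {
    "markdown": "# Page 1\n\nRecognized text...",
    "inference_info": json.dumps(
        [{"model_id": "zai-org/GLM-OCR", "column_name": "markdown"}]
    ),
}

# inference_info is a JSON-encoded list with one entry per OCR
# model that has been applied to the dataset.
for info in json.loads(row["inference_info"]):
    print(info["model_id"], "->", info["column_name"])
```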
|
|
## Reproduction
|
|
```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr-v2.py \
  yhan86/mis1997image \
  <output-dataset> \
  --image-column image \
  --batch-size 16 \
  --task ocr
```
|
|
Generated with [UV Scripts](https://huggingface.co/uv-scripts) (glm-ocr-v2.py)
|
|