---
tags:
- ocr
- document-processing
- glm-ocr
- markdown
- uv-script
- generated
---

# Document OCR using GLM-OCR

This dataset contains OCR results for the images in [minhpvo/ocr-input](https://huggingface.co/datasets/minhpvo/ocr-input), produced with GLM-OCR, a compact 0.9B-parameter OCR model with state-of-the-art performance.

## Processing Details

- **Source Dataset**: [minhpvo/ocr-input](https://huggingface.co/datasets/minhpvo/ocr-input)
- **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
- **Task**: text recognition
- **Number of Samples**: 13
- **Processing Time**: 2.2 min
- **Processing Date**: 2026-02-09 04:12 UTC

### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 16
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 16,384
- **Temperature**: 0.01
- **Top P**: 1e-05
- **GPU Memory Utilization**: 80%

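The card does not state which inference backend the script uses; assuming a vLLM-style engine (suggested by the GPU memory utilization setting), the values above would group into sampling and engine parameters roughly as sketched below. The dictionary names and key names are assumptions for illustration, not the script's actual API.

```python
# Hypothetical grouping of the configuration above; whether glm-ocr.py
# actually passes these to vLLM (or another engine) is an assumption.
sampling_params = {
    "temperature": 0.01,   # near-greedy decoding for faithful transcription
    "top_p": 1e-05,        # effectively keeps only the single most likely token
    "max_tokens": 16_384,  # Max Output Tokens: cap on generated markdown length
}
engine_args = {
    "max_model_len": 8_192,          # Max Model Length: prompt + output budget
    "gpu_memory_utilization": 0.8,   # use 80% of available GPU memory
}
```

With temperature 0.01 and top-p 1e-05, decoding is effectively deterministic, which is the usual choice for transcription tasks where reproducible output matters more than diversity.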
## Model Information

GLM-OCR is a compact, high-performance OCR model:

- 0.9B parameters
- 94.62% on OmniDocBench V1.5
- CogViT visual encoder + GLM-0.5B language decoder
- Multi-Token Prediction (MTP) loss for efficiency
- Multilingual: Chinese, English, French, Spanish, Russian, German, Japanese, Korean
- MIT licensed

## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: the extracted text in Markdown format
- `inference_info`: a JSON-encoded list tracking every OCR model applied to this dataset

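Because `inference_info` is stored as a JSON string, consumers need to decode it before use. A minimal sketch using only the standard library follows; the field names `model_id` and `column_name` are assumptions about the entry schema, not guaranteed by the script.

```python
import json

# Hypothetical inference_info cell; real entries may carry more fields.
inference_info = json.dumps(
    [{"model_id": "zai-org/GLM-OCR", "column_name": "markdown"}]
)

# Decode the JSON list and find which model produced the `markdown` column.
entries = json.loads(inference_info)
producers = [e["model_id"] for e in entries if e.get("column_name") == "markdown"]
print(producers)  # ['zai-org/GLM-OCR']
```

Keeping this column as a list means repeated OCR runs over the same dataset can append an entry each time, preserving the full processing history.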
## Reproduction

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
  minhpvo/ocr-input \
  <output-dataset> \
  --image-column image \
  --batch-size 16 \
  --task ocr
```

Generated with [UV Scripts](https://huggingface.co/uv-scripts)