---
tags:
- ocr
- document-processing
- glm-ocr
- markdown
- uv-script
- generated
configs:
- config_name: dots-ocr
  data_files:
  - split: train
    path: dots-ocr/train-*
dataset_info:
  config_name: dots-ocr
  features:
  - name: image
    dtype: image
  - name: drawer_id
    dtype: string
  - name: card_number
    dtype: int64
  - name: filename
    dtype: string
  - name: text
    dtype: string
  - name: has_ocr
    dtype: bool
  - name: source
    dtype: string
  - name: source_url
    dtype: string
  - name: ia_collection
    dtype: string
  - name: markdown
    dtype: string
  - name: inference_info
    dtype: string
  splits:
  - name: train
    num_bytes: 4866518
    num_examples: 50
  download_size: 4853285
  dataset_size: 4866518
---
# Document OCR using GLM-OCR

This dataset contains OCR results generated from the images in biglam/bpl-card-catalog using GLM-OCR, a compact 0.9B-parameter OCR model that reports state-of-the-art document OCR performance.
## Processing Details
- Source Dataset: biglam/bpl-card-catalog
- Model: zai-org/GLM-OCR
- Task: text recognition
- Number of Samples: 50
- Processing Time: 21.2 min
- Processing Date: 2026-02-22 15:39 UTC
## Configuration

- Image Column: `image`
- Output Column: `markdown`
- Dataset Split: `train`
- Batch Size: 16
- Max Model Length: 8,192 tokens
- Max Output Tokens: 8,192
- Temperature: 0.01
- Top P: 1e-05
- GPU Memory Utilization: 80.0%
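
The settings above can be collected into a plain mapping for reuse in your own processing script; a minimal sketch (the key names here are illustrative assumptions, not the script's actual flag names):

```python
# Generation settings from the Configuration section, as a plain dict.
# Key names are assumptions for illustration, not the script's real flags.
GENERATION_CONFIG = {
    "batch_size": 16,
    "max_model_len": 8192,        # tokens
    "max_output_tokens": 8192,
    "temperature": 0.01,          # near-greedy decoding for stable OCR output
    "top_p": 1e-05,               # effectively disables nucleus sampling
    "gpu_memory_utilization": 0.80,
}

print(GENERATION_CONFIG["temperature"])  # 0.01
```

The near-zero temperature and top-p make decoding effectively deterministic, which is the usual choice for OCR, where creative variation is undesirable.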
## Model Information
GLM-OCR is a compact, high-performance OCR model:
- 0.9B parameters
- 94.62% on OmniDocBench V1.5
- CogViT visual encoder + GLM-0.5B language decoder
- Multi-Token Prediction (MTP) loss for efficiency
- Multilingual: zh, en, fr, es, ru, de, ja, ko
- MIT licensed
## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: the extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied to this dataset
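
Since `inference_info` is stored as a JSON string, it can be parsed with the standard library; a minimal sketch (the entry keys `model_id` and `column_name` are assumptions, since the card only states the field is a JSON list of the OCR models applied):

```python
import json

# Hypothetical example of an `inference_info` value; the exact entry keys
# are an assumption -- the card only says it is a JSON list tracking the
# OCR models applied to this dataset.
inference_info = json.dumps([
    {
        "model_id": "zai-org/GLM-OCR",  # model named in Processing Details
        "column_name": "markdown",      # output column named above
    }
])

def models_applied(info_json: str) -> list[str]:
    """Return the model ids recorded in an inference_info JSON list."""
    return [entry["model_id"] for entry in json.loads(info_json)]

print(models_applied(inference_info))  # ['zai-org/GLM-OCR']
```

Because the field is a list, a dataset that has been run through several OCR models accumulates one entry per model, so downstream consumers can tell which model produced which column.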
## Reproduction

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
  biglam/bpl-card-catalog \
  <output-dataset> \
  --image-column image \
  --batch-size 16 \
  --task ocr
```
Generated with UV Scripts