davanstrien (HF Staff) committed
Commit 26752fb · verified · 1 Parent(s): 276112a

Upload README.md with huggingface_hub

Files changed (1): README.md (+61 -35)

README.md CHANGED
@@ -1,37 +1,63 @@
  ---
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: drawer_id
-     dtype: string
-   - name: card_number
-     dtype: int64
-   - name: filename
-     dtype: string
-   - name: text
-     dtype: string
-   - name: has_ocr
-     dtype: bool
-   - name: source
-     dtype: string
-   - name: source_url
-     dtype: string
-   - name: ia_collection
-     dtype: string
-   - name: markdown
-     dtype: string
-   - name: inference_info
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 14792451103.156
-     num_examples: 49654
-   download_size: 14644993566
-   dataset_size: 14792451103.156
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ tags:
+ - ocr
+ - document-processing
+ - glm-ocr
+ - markdown
+ - uv-script
+ - generated
  ---
+
+ # Document OCR using GLM-OCR
+
+ This dataset contains OCR results for the images in [biglam/rubenstein-manuscript-catalog](https://huggingface.co/datasets/biglam/rubenstein-manuscript-catalog), produced with GLM-OCR, a compact 0.9B-parameter OCR model with state-of-the-art OCR performance.
+
+ ## Processing Details
+
+ - **Source Dataset**: [biglam/rubenstein-manuscript-catalog](https://huggingface.co/datasets/biglam/rubenstein-manuscript-catalog)
+ - **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
+ - **Task**: text recognition
+ - **Number of Samples**: 49,654
+ - **Processing Time**: 343.2 min
+ - **Processing Date**: 2026-02-14 15:38 UTC
+
+ ### Configuration
+
+ - **Image Column**: `image`
+ - **Output Column**: `markdown`
+ - **Dataset Split**: `train`
+ - **Batch Size**: 64
+ - **Max Model Length**: 8,192 tokens
+ - **Max Output Tokens**: 8,192
+ - **Temperature**: 0.01
+ - **Top P**: 1e-05
+ - **GPU Memory Utilization**: 95.0%
+
+ ## Model Information
+
+ GLM-OCR is a compact, high-performance OCR model:
+ - 0.9B parameters
+ - 94.62% on OmniDocBench V1.5
+ - CogViT visual encoder + GLM-0.5B language decoder
+ - Multi-Token Prediction (MTP) loss for efficiency
+ - Multilingual: zh, en, fr, es, ru, de, ja, ko
+ - MIT licensed
+
+ ## Dataset Structure
+
+ The dataset contains all original columns plus:
+ - `markdown`: the extracted text in markdown format
+ - `inference_info`: a JSON list tracking all OCR models applied to this dataset
+
+ ## Reproduction
+
+ ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
+   biglam/rubenstein-manuscript-catalog \
+   <output-dataset> \
+   --image-column image \
+   --batch-size 64 \
+   --task ocr
+ ```
+
+ Generated with [UV Scripts](https://huggingface.co/uv-scripts)
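
The `inference_info` provenance described in the Dataset Structure section can be consumed as plain JSON. A minimal sketch, assuming a record shape with `model_id` and `column_name` keys — these key names are an assumption for illustration, not documented by this README:

```python
import json

# Hypothetical record in the shape this dataset describes: every original
# column plus `markdown` (the OCR output) and `inference_info`, a
# JSON-encoded list of the OCR models applied. The keys `model_id` and
# `column_name` are assumed for illustration.
sample = {
    "markdown": "# Catalog card\n\nManuscript description ...",
    "inference_info": json.dumps(
        [{"model_id": "zai-org/GLM-OCR", "column_name": "markdown"}]
    ),
}

# Decode the provenance list and collect the models that wrote OCR columns.
models = [entry["model_id"] for entry in json.loads(sample["inference_info"])]
print(models)
```

Because the column is a JSON list rather than a single value, repeated OCR passes with different models can each append an entry without clobbering earlier provenance.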