davanstrien (HF Staff) committed
Commit 9a42d22 · verified · 1 Parent(s): c1a9c1d

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +61 -31
README.md CHANGED
@@ -1,33 +1,63 @@
 ---
-dataset_info:
-  features:
-  - name: document_id
-    dtype: string
-  - name: page_number
-    dtype: string
-  - name: image
-    dtype: image
-  - name: text
-    dtype: string
-  - name: alto_xml
-    dtype: string
-  - name: has_image
-    dtype: bool
-  - name: has_alto
-    dtype: bool
-  - name: markdown
-    dtype: string
-  - name: inference_info
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 2062901.0
-    num_examples: 10
-  download_size: 1582808
-  dataset_size: 2062901.0
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
 ---
 ---
+tags:
+- ocr
+- document-processing
+- glm-ocr
+- markdown
+- uv-script
+- generated
 ---
+
+# Document OCR using GLM-OCR
+
+This dataset contains OCR results from images in [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset) using GLM-OCR, a compact 0.9B-parameter OCR model achieving SOTA performance.
+
15
+## Processing Details
+
+- **Source Dataset**: [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset)
+- **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
+- **Task**: text recognition
+- **Number of Samples**: 10
+- **Processing Time**: 4.1 min
+- **Processing Date**: 2026-02-05 14:42 UTC
+
24
+### Configuration
+
+- **Image Column**: `image`
+- **Output Column**: `markdown`
+- **Dataset Split**: `train`
+- **Batch Size**: 16
+- **Max Model Length**: 8,192 tokens
+- **Max Output Tokens**: 16,384
+- **Temperature**: 0.01
+- **Top P**: 1e-05
+- **GPU Memory Utilization**: 80.0%
+
36
+## Model Information
+
+GLM-OCR is a compact, high-performance OCR model:
+- 0.9B parameters
+- 94.62% on OmniDocBench V1.5
+- CogViT visual encoder + GLM-0.5B language decoder
+- Multi-Token Prediction (MTP) loss for efficiency
+- Multilingual: zh, en, fr, es, ru, de, ja, ko
+- MIT licensed
+
46
+## Dataset Structure
+
+The dataset contains all original columns plus:
+- `markdown`: The extracted text in markdown format
+- `inference_info`: JSON list tracking all OCR models applied to this dataset
+
52
+## Reproduction
+
+```bash
+uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
+  NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
+  <output-dataset> \
+  --image-column image \
+  --batch-size 16 \
+  --task ocr
+```
+
+Generated with [UV Scripts](https://huggingface.co/uv-scripts)
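The `inference_info` column described in the card above is stored as a JSON string per row, so it can be parsed with the standard `json` module. A minimal sketch — the field names in the sample value below are illustrative assumptions, not guaranteed by the upload script:

```python
import json

# Illustrative inference_info value for one row; the exact field names
# written by the OCR script are an assumption, not confirmed by the card.
row_inference_info = '[{"model_id": "zai-org/GLM-OCR", "column_name": "markdown"}]'

# Each entry records one OCR model that was applied to the dataset.
models_applied = json.loads(row_inference_info)
for entry in models_applied:
    print(f'{entry["model_id"]} -> column {entry["column_name"]}')
```

Because the column is a list, datasets that have been processed by several OCR scripts in sequence accumulate one entry per model run.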