technocreep committed on
Commit 7af3d8b · verified · 1 Parent(s): 4d5dfaa

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +22 -81

README.md CHANGED
@@ -2,44 +2,24 @@
  tags:
  - ocr
  - document-processing
- - lighton-ocr-2
  - markdown
  - uv-script
  - generated
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: text
-     dtype: string
-   - name: markdown
-     dtype: string
-   - name: inference_info
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 7934646.0
-     num_examples: 50
-   download_size: 7922795
-   dataset_size: 7934646.0
  ---
 
- # Document OCR using LightOnOCR-2-1B
 
- This dataset contains OCR results from images in [technocreep/ussr_typewriter](https://huggingface.co/datasets/technocreep/ussr_typewriter) using LightOnOCR-2, a fast and compact 1B OCR model trained with RLVR.
 
  ## Processing Details
 
  - **Source Dataset**: [technocreep/ussr_typewriter](https://huggingface.co/datasets/technocreep/ussr_typewriter)
- - **Model**: [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B)
  - **Number of Samples**: 50
- - **Processing Time**: 2.0 min
- - **Processing Date**: 2026-02-25 11:55 UTC
 
  ### Configuration
 
@@ -47,76 +27,37 @@ This dataset contains OCR results from images in [technocreep/ussr_typewriter](h
  - **Output Column**: `markdown`
  - **Dataset Split**: `train`
  - **Batch Size**: 16
- - **Target Image Size**: 1540px (longest dimension)
  - **Max Model Length**: 8,192 tokens
- - **Max Output Tokens**: 4,096
- - **Temperature**: 0.2
- - **Top P**: 0.9
  - **GPU Memory Utilization**: 80.0%
 
  ## Model Information
 
- LightOnOCR-2 is a next-generation fast, compact OCR model that excels at:
- - ⚡ **Fastest Speed** - 42.8 pages/second on H100 GPU (7× faster than v1)
- - 🎯 **High Accuracy** - 83.2 ± 0.9% on OlmOCR-Bench (+7.1% vs v1)
- - 🧠 **RLVR Training** - Eliminates repetition loops and formatting errors
- - 📚 **Better Dataset** - 2.5× larger training data with cleaner annotations
- - 📐 **LaTeX formulas** - Mathematical notation in LaTeX format
- - 📊 **Tables** - Extracted and formatted as markdown
- - 📝 **Document structure** - Hierarchy and layout preservation
- - 🌍 **Multilingual** - Optimized for European languages
- - 💪 **Production-ready** - Outperforms models 9× larger
-
- ### Key Improvements over v1
-
- - **7.5× faster**: 42.8 vs 5.71 pages/sec on H100
- - **+7.1% accuracy**: 83.2% vs 76.1% on benchmarks
- - **Better quality**: RLVR training eliminates common OCR errors
- - **Cleaner output**: No repetition loops or formatting glitches
- - **Simpler**: Single model (no vocabulary variants)
 
  ## Dataset Structure
 
  The dataset contains all original columns plus:
- - `markdown`: The extracted text in markdown format with LaTeX formulas
  - `inference_info`: JSON list tracking all OCR models applied to this dataset
 
- ## Usage
-
- ```python
- from datasets import load_dataset
- import json
-
- # Load the dataset
- dataset = load_dataset("{output_dataset_id}", split="train")
-
- # Access the markdown text
- for example in dataset:
-     print(example["markdown"])
-     break
-
- # View all OCR models applied to this dataset
- inference_info = json.loads(dataset[0]["inference_info"])
- for info in inference_info:
-     print(f"Column: {info['column_name']} - Model: {info['model_id']}")
- ```
-
  ## Reproduction
 
- This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) LightOnOCR-2 script:
-
  ```bash
- uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
    technocreep/ussr_typewriter \
    <output-dataset> \
    --image-column image \
-   --batch-size 16
  ```
 
- ## Performance
-
- - **Processing Speed**: ~0.42 images/second
- - **Benchmark Score**: 83.2 ± 0.9% on OlmOCR-Bench
- - **Training**: RLVR (Reinforcement Learning with Verifiable Rewards)
-
- Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
 
  tags:
  - ocr
  - document-processing
+ - glm-ocr
  - markdown
  - uv-script
  - generated
  ---
 
+ # Document OCR using GLM-OCR
 
+ This dataset contains OCR results from images in [technocreep/ussr_typewriter](https://huggingface.co/datasets/technocreep/ussr_typewriter) using GLM-OCR, a compact 0.9B OCR model achieving SOTA performance.
 
  ## Processing Details
 
  - **Source Dataset**: [technocreep/ussr_typewriter](https://huggingface.co/datasets/technocreep/ussr_typewriter)
+ - **Model**: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
+ - **Task**: text recognition
  - **Number of Samples**: 50
+ - **Processing Time**: 1.5 min
+ - **Processing Date**: 2026-02-25 12:54 UTC
 
  ### Configuration
 
  - **Output Column**: `markdown`
  - **Dataset Split**: `train`
  - **Batch Size**: 16
  - **Max Model Length**: 8,192 tokens
+ - **Max Output Tokens**: 8,192
+ - **Temperature**: 0.01
+ - **Top P**: 1e-05
  - **GPU Memory Utilization**: 80.0%
 
  ## Model Information
 
+ GLM-OCR is a compact, high-performance OCR model:
+ - 0.9B parameters
+ - 94.62% on OmniDocBench V1.5
+ - CogViT visual encoder + GLM-0.5B language decoder
+ - Multi-Token Prediction (MTP) loss for efficiency
+ - Multilingual: zh, en, fr, es, ru, de, ja, ko
+ - MIT licensed
 
  ## Dataset Structure
 
  The dataset contains all original columns plus:
+ - `markdown`: The extracted text in markdown format
  - `inference_info`: JSON list tracking all OCR models applied to this dataset
 
  ## Reproduction
 
  ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
    technocreep/ussr_typewriter \
    <output-dataset> \
    --image-column image \
+   --batch-size 16 \
+   --task ocr
  ```
 
+ Generated with [UV Scripts](https://huggingface.co/uv-scripts)
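The new README documents that `inference_info` is a JSON-encoded list with one entry per OCR pass applied to the dataset. A minimal sketch of decoding it, using a hypothetical in-memory record in place of a real `load_dataset` call (the `column_name`/`model_id` field names are assumptions carried over from the removed Usage section):

```python
import json

# In practice the record would come from the pushed dataset, e.g.:
#   from datasets import load_dataset
#   record = load_dataset("<output-dataset>", split="train")[0]
# A hypothetical record stands in here so the sketch is self-contained.
record = {
    "markdown": "# Page 1\n\nRecognized typewriter text...",
    "inference_info": json.dumps(
        [{"column_name": "markdown", "model_id": "zai-org/GLM-OCR"}]
    ),
}

# `inference_info` is stored as a JSON string; decode it to list every
# OCR model that produced a column in this dataset.
applied = json.loads(record["inference_info"])
for info in applied:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
# → Column: markdown - Model: zai-org/GLM-OCR
```

Because each OCR run appends an entry rather than overwriting the list, the same loop reports earlier passes (such as the removed LightOnOCR-2 run) alongside the GLM-OCR one when several models have been applied.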