davanstrien (HF Staff) committed
Commit a017767 · verified · 1 parent: ae46b2b

Upload README.md with huggingface_hub

 
---
tags:
- ocr
- document-processing
- dots-ocr
- multilingual
- markdown
- uv-script
- generated
---

# Document OCR using dots.ocr

This dataset contains OCR results from images in [biglam/rubenstein-manuscript-catalog](https://huggingface.co/datasets/biglam/rubenstein-manuscript-catalog) using dots.ocr, a compact 1.7B-parameter multilingual document parsing model.

## Processing Details

- **Source Dataset**: [biglam/rubenstein-manuscript-catalog](https://huggingface.co/datasets/biglam/rubenstein-manuscript-catalog)
- **Model**: [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
- **Number of Samples**: 50
- **Processing Time**: 5.3 min
- **Processing Date**: 2026-02-15 00:39 UTC
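
The processing time above implies an effective end-to-end throughput; this is a derived figure, not one stated on the card:

```python
num_samples = 50          # "Number of Samples" above
processing_minutes = 5.3  # "Processing Time" above

# Effective throughput implied by the run: samples divided by wall-clock seconds.
throughput = num_samples / (processing_minutes * 60)
print(f"~{throughput:.2f} images/second")
# → ~0.16 images/second
```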

### Configuration

- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 16
- **Prompt Mode**: `ocr`
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 8,192
- **GPU Memory Utilization**: 80.0%
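
With 50 samples and a batch size of 16, the run processes the dataset in a handful of batches, the last one partially filled; a quick back-of-envelope check (plain arithmetic, not taken from the script):

```python
import math

num_samples = 50  # "Number of Samples" above
batch_size = 16   # "Batch Size" above

# How many full batches, and how many images are left for the final batch.
full_batches, remainder = divmod(num_samples, batch_size)
total_batches = math.ceil(num_samples / batch_size)

print(f"{total_batches} batches: {full_batches} full, last batch holds {remainder} images")
# → 4 batches: 3 full, last batch holds 2 images
```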

## Model Information

dots.ocr is a compact multilingual document parsing model that excels at:

- 🌍 **100+ Languages** - Multilingual document support
- 📊 **Table extraction** - Structured data recognition
- 📐 **Formulas** - Mathematical notation preservation
- 📝 **Layout-aware** - Reading order and structure preservation
- 🎯 **Compact** - Only 1.7B parameters

## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: The extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied to this dataset

## Usage
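
A minimal sketch of reading the `inference_info` tracking column described above. The example row and the `model_id`/`column_name` field names are illustrative assumptions; real rows come from loading this repository with the `datasets` library:

```python
import json

# Illustrative row shaped like this dataset's records; real rows come from
# datasets.load_dataset("<output-dataset>", split="train").
row = {
    "markdown": "# Catalog entry\n\nTranscribed manuscript description...",
    "inference_info": json.dumps(
        [{"model_id": "rednote-hilab/dots.ocr", "column_name": "markdown"}]
    ),
}

# inference_info is a JSON list tracking every OCR model applied to the dataset.
inference_info = json.loads(row["inference_info"])
for info in inference_info:
    print(f"column {info['column_name']!r} was produced by {info['model_id']}")
```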
 
## Reproduction

This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) dots.ocr script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
  biglam/rubenstein-manuscript-catalog \
  <output-dataset> \
  --image-column image \
  --batch-size 16 \
  --prompt-mode ocr \
  --max-model-len 8192 \
  --max-tokens 8192 \
  --gpu-memory-utilization 0.8
```
  Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)