harryrobert committed
Commit 90c8c11 · verified · 1 parent: e54294a

Add LaTeX OCR dataset (90/5/5 split, 2-stage augmentation)
README.md CHANGED
@@ -1,3 +1,213 @@
- ---
- license: mit
- ---
---
configs:
- config_name: default
  data_files:
  - split: stage1
    path: "data/stage1-*.parquet"
  - split: stage2
    path: "data/stage2-*.parquet"
  - split: validation
    path: "data/val-*.parquet"
  - split: test
    path: "data/test-*.parquet"
task_categories:
- image-to-text
language:
- en
tags:
- latex
- ocr
- math
- formula-recognition
license: cc-by-4.0
---

# LaTeX OCR Dataset

A dataset for training LaTeX OCR models that convert images of mathematical formulas into LaTeX source code.
Built by merging and re-splitting multiple public sources, then applying two levels of augmentation for two-stage training.

---

## Dataset Summary

| Property | Value |
|---|---|
| Total unique samples | 732,952 |
| Train (stage1) | 659,658 |
| Train (stage2) | 659,658 |
| Validation | 36,647 |
| Test | 36,647 |
| Image height | 64 px (fixed) |
| Image width | 16–672 px (variable, aligned to 16 px) |
| Label format | Raw LaTeX string |
| Max token length | 200 tokens |

---

## Sources

All data is merged from the following public datasets, filtered, shuffled, then re-split 90/5/5:

| Dataset | Config | Splits used |
|---|---|---|
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `full` | train + validation + test |
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `synthetic_handwrite` | train + validation + test |
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `human_handwrite` | train + validation + test |
| [OleehyO/latex-formulas](https://huggingface.co/datasets/OleehyO/latex-formulas) | `cleaned_formulas` | train |

**Filtering:** samples with fewer than 2 or more than 200 LaTeX tokens are removed.
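The length filter can be reproduced with the tokenizer shown later in this card (a sketch; the exact helper names are illustrative, not from the preprocessing code):

```python
import re

def latex_tokens(label: str) -> list[str]:
    # Split into LaTeX commands (\frac, \alpha, ...) and single non-space characters.
    return re.findall(r"\\[a-zA-Z]+|[^\s]", label)

def keep(label: str, min_tokens: int = 2, max_tokens: int = 200) -> bool:
    # Keep samples whose token count lies in [min_tokens, max_tokens].
    n = len(latex_tokens(label))
    return min_tokens <= n <= max_tokens

print(keep(r"\frac{1}{2}"))  # \frac { 1 } { 2 } -> 7 tokens -> True
print(keep("x"))             # 1 token -> False
```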
---

## Splits

### `stage1` — Light augmentation (for Stage 1 training: encoder warm-up)

Same 659,658 training samples as `stage2`, but with lighter augmentation.
Each image is independently re-augmented, so `stage1` and `stage2` are **not identical**.

Augmentations applied (each with independent probability):

| Augmentation | Probability | Parameters |
|---|---|---|
| Gaussian blur | 0.30 | radius 0.3–1.2 |
| Rotation | 0.30 | –3° to +3° |
| Background color blend | 0.40 | random warm background |
| Edge shadow | 0.20 | left / right / top / bottom |
| Low resolution | 0.20 | downscale 20–60% then upscale |

35% of samples are kept clean (no augmentation applied).
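The clean-sample gate and independent per-augmentation probabilities can be sketched with PIL, here for the blur and rotation entries of the table (a sketch; ordering, interpolation, and fill color are assumptions):

```python
import random
from PIL import Image, ImageFilter

def light_augment(img: Image.Image, rng: random.Random) -> Image.Image:
    # 35% of samples pass through untouched.
    if rng.random() < 0.35:
        return img
    # Each remaining augmentation fires independently with its table probability.
    if rng.random() < 0.30:  # Gaussian blur, radius 0.3-1.2
        img = img.filter(ImageFilter.GaussianBlur(rng.uniform(0.3, 1.2)))
    if rng.random() < 0.30:  # Rotation, -3 to +3 degrees
        img = img.rotate(rng.uniform(-3, 3), expand=False, fillcolor="white")
    return img
```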
---

### `stage2` — Heavy augmentation (for Stage 2 training: LoRA fine-tuning)

Same training indices as `stage1`, re-augmented with a heavier pipeline.

Augmentations applied (each with independent probability):

| Augmentation | Probability | Parameters |
|---|---|---|
| JPEG compression | 0.40 | quality 30–75 |
| Low resolution | 0.40 | downscale 20–60% then upscale |
| Gaussian noise | 0.35 | std 5–25 |
| Salt & pepper noise | 0.20 | amount 1–5% |
| Gaussian blur | 0.30 | radius 0.3–1.2 |
| Background color blend | 0.40 | random warm background |
| Rotation | 0.35 | –3° to +3° |
| Perspective distortion | 0.25 | deviation 2–6% |
| Random erase | 0.30 | 1–3 rectangles, white/black/gray fill |
| Edge shadow | 0.25 | left / right / top / bottom |

35% of samples are kept clean (no augmentation applied).
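The two noise entries of the table can be sketched with NumPy, using the parameter ranges above (a sketch; the exact sampling details are assumptions):

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

def gaussian_noise(img: Image.Image, std_range=(5, 25)) -> Image.Image:
    # Add zero-mean Gaussian noise with a std drawn from the table's 5-25 range.
    arr = np.asarray(img).astype(np.float32)
    arr += rng.normal(0.0, rng.uniform(*std_range), arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

def salt_pepper(img: Image.Image, amount_range=(0.01, 0.05)) -> Image.Image:
    # Flip a random 1-5% of pixels to pure white (salt) or black (pepper).
    arr = np.asarray(img).copy()
    mask = rng.random(arr.shape[:2]) < rng.uniform(*amount_range)
    salt = rng.random(arr.shape[:2]) < 0.5
    arr[mask & salt] = 255
    arr[mask & ~salt] = 0
    return Image.fromarray(arr)
```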
---

### `validation` and `test`

Drawn from the same shuffled pool (5% each). No augmentation — images are only resized to 64 px height.

---

## Data Format

Each row contains two columns:

| Column | Type | Description |
|---|---|---|
| `image` | `PIL.Image` (JPEG) | Formula image, height = 64 px, width ≤ 672 px |
| `label` | `str` | Ground-truth LaTeX string |

**Image preprocessing:**
- Convert to RGB
- Resize to height = 64 px (width scaled proportionally)
- Width clamped to 672 px maximum
- Width aligned to the nearest multiple of 16 px (patch size)
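The preprocessing steps above can be sketched as (a sketch; the resampling filter is an assumption):

```python
from PIL import Image

TARGET_H, MAX_W, PATCH = 64, 672, 16

def preprocess(img: Image.Image) -> Image.Image:
    img = img.convert("RGB")
    # Scale to a fixed 64 px height, preserving aspect ratio.
    w = round(img.width * TARGET_H / img.height)
    # Clamp to the maximum width, then align to the 16 px patch grid.
    w = min(w, MAX_W)
    w = max(PATCH, round(w / PATCH) * PATCH)
    return img.resize((w, TARGET_H), Image.LANCZOS)
```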
**LaTeX tokenization** (for reference, not stored):
```python
import re
tokens = re.findall(r"\\[a-zA-Z]+|[^\s]", label)
```

---

## Usage

### Load the full dataset

```python
from datasets import load_dataset

ds = load_dataset("harryrobert/latex-ocr")
print(ds)
# DatasetDict({
#     stage1: Dataset({features: ['image', 'label'], num_rows: 659658}),
#     stage2: Dataset({features: ['image', 'label'], num_rows: 659658}),
#     validation: Dataset({features: ['image', 'label'], num_rows: 36647}),
#     test: Dataset({features: ['image', 'label'], num_rows: 36647})
# })
```

### Load a specific split

```python
from datasets import load_dataset

train_stage1 = load_dataset("harryrobert/latex-ocr", split="stage1")
val = load_dataset("harryrobert/latex-ocr", split="validation")

sample = train_stage1[0]
print(sample["label"])   # e.g. '\frac{1}{2}'
sample["image"].show()   # PIL Image
```

### Streaming (large splits)

```python
from datasets import load_dataset

ds = load_dataset("harryrobert/latex-ocr", split="stage1", streaming=True)
for sample in ds.take(5):
    print(sample["label"])
```

### PyTorch DataLoader integration

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("harryrobert/latex-ocr", split="stage1")
ds = ds.with_format("torch")

def collate(batch):
    # Images have variable widths, so the default collate would fail;
    # keep them as a list and pad to a common width in the training loop.
    return {
        "image": [item["image"] for item in batch],
        "label": [item["label"] for item in batch],
    }

loader = DataLoader(ds, batch_size=32, shuffle=True, collate_fn=collate)
```

---

## Training Recipe

This dataset is designed for a two-stage training pipeline:

**Stage 1** — Train the visual encoder, freeze the language-model decoder:
```
train on split="stage1"
evaluate on split="validation"
```

**Stage 2** — LoRA fine-tuning of the full model:
```
train on split="stage2"
evaluate on split="validation"
```

**Final evaluation:**
```
evaluate on split="test"
```
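For the final pass over `split="test"`, exact match and a simple token-level accuracy are common scores (a sketch; the metric choice is not specified by this card, and the tokenizer mirrors the regex shown earlier):

```python
import re

def latex_tokens(s: str) -> list[str]:
    return re.findall(r"\\[a-zA-Z]+|[^\s]", s)

def exact_match(preds, refs) -> float:
    # Fraction of predictions that match the reference string exactly.
    return sum(p.strip() == r.strip() for p, r in zip(preds, refs)) / len(refs)

def token_accuracy(pred: str, ref: str) -> float:
    # Position-wise token matches, normalized by the longer sequence.
    p, r = latex_tokens(pred), latex_tokens(ref)
    return sum(a == b for a, b in zip(p, r)) / max(len(p), len(r), 1)
```

Note that exact match penalizes semantically equivalent spellings such as `x^2` vs `x^{2}`; normalizing the LaTeX before comparison is a common refinement.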
---

## License

Dataset contents are derived from [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR)
and [OleehyO/latex-formulas](https://huggingface.co/datasets/OleehyO/latex-formulas).
Please refer to the original datasets for their respective licenses.
data/stage1-00001-of-00006.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b2619688ef2d52d230e2f0ada6d9cd44d2560c89714190f4a058ff86f5d863b8
size 495714932
data/stage1-00002-of-00006.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:325fed3bf061dde949e545ed518f15d68eda526a2ee986e4c99a3ced1cad70f6
size 495139177
data/stage1-00003-of-00006.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:06b486f297d998c4be362086a068b49d6cd2ca5457ad7ae62a6d792734defaa6
size 495651328
data/stage1-00004-of-00006.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0f18d0480f2b2e8b741a576cf72bbfabe08ee055de4aa11c04a80cf31326ff51
size 496150187
data/stage1-00005-of-00006.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8a5c2329c057fd67cbc4ee8106d46be47999101ec580cf37cec632574ca4ab35
size 495731752
data/stage1-00006-of-00006.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ac1d05cb5cb482bca0bab6ca3e0d0e84b3cc92ed2b71fbcda242a84bbb064808
size 343824876
data/stage2-00001-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:aaee59ed8805a2a8ec1c3476848b26b9f6b78fe64b82b2f7994097c717d679b2
size 514224411
data/stage2-00002-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c22f49d9ca30c27a311875adb5839e5641e328b0efa93995227838a467c4c3ba
size 514212415
data/stage2-00003-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f8aa94512b326fbc4cb6f6f3cfea6b76fe89d4cdfa0b42bea715cd49e58efe85
size 514334661
data/stage2-00004-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b96f182214ed1de954a8a0b69390b0e75213538ada7ba2f29cc8752a0bd95d4b
size 514077542
data/stage2-00005-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:6d67f9c8528a93af41bfe09a4b05e6a404889d3fa602356f3829c43ddae3c1ce
size 514127630
data/stage2-00006-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7ea896c8389eda49da986167c97049296bacb36cb5100c49be6e9fab373b4193
size 514006590
data/stage2-00007-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9a6f00b81af62b31ba919758d6e90a456e24711e31001041a63d4bfd1229d9ea
size 514489182
data/stage2-00008-of-00008.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:883779bdac6d8c24a5ccc937033832c905255e9195aa134876af16ad179e2523
size 175126847
data/test-00001-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:67980113f4dd970d5959f4c342c39b54fbf4edbb6302d5e5391ecfd6da82426b
size 225341502
data/val-00001-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d38e001f1a1a52c4f80425f0b87c30f14700bb4424131c7175c635b6dff9f362
size 227487352