---
configs:
- config_name: default
  data_files:
  - split: stage1
    path: "data/stage1-*.parquet"
  - split: stage2
    path: "data/stage2-*.parquet"
  - split: validation
    path: "data/val-*.parquet"
  - split: test
    path: "data/test-*.parquet"
task_categories:
- image-to-text
language:
- en
tags:
- latex
- ocr
- math
- formula-recognition
license: cc-by-4.0
---
# LaTeX OCR Dataset
A dataset for training LaTeX OCR models that convert images of mathematical formulas into LaTeX source code.
Built by merging and re-splitting multiple public sources, then applying two levels of augmentation for two-stage training.
---
## Dataset Summary
| Property | Value |
|---|---|
| Total unique samples | 732,952 |
| Train (stage1) | 659,658 |
| Train (stage2) | 659,658 |
| Validation | 36,647 |
| Test | 36,647 |
| Image height | 64 px (fixed) |
| Image width | 16 – 672 px (variable, aligned to 16px) |
| Label format | Raw LaTeX string |
| Max token length | 200 tokens |
---
## Sources
All data is merged from the following public datasets, filtered, shuffled, and re-split 90/5/5 into train/validation/test:
| Dataset | Config | Splits used |
|---|---|---|
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `full` | train + validation + test |
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `synthetic_handwrite` | train + validation + test |
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `human_handwrite` | train + validation + test |
| [OleehyO/latex-formulas](https://huggingface.co/datasets/OleehyO/latex-formulas) | `cleaned_formulas` | train |
**Filtering:** samples with fewer than 2 or more than 200 LaTeX tokens are removed.
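A minimal sketch of the merge, filter, and re-split using the `datasets` library (the shuffle seed and any column renaming needed to harmonize the sources into `image`/`label` are assumptions, not the actual build script):
```python
import re
from datasets import load_dataset, concatenate_datasets

def keep(example):
    # Token-length filter: keep samples with 2-200 LaTeX tokens.
    n = len(re.findall(r"\\[a-zA-Z]+|\S", example["label"]))
    return 2 <= n <= 200

parts = [
    load_dataset("linxy/LaTeX_OCR", cfg, split=s)
    for cfg in ("full", "synthetic_handwrite", "human_handwrite")
    for s in ("train", "validation", "test")
]
parts.append(load_dataset("OleehyO/latex-formulas", "cleaned_formulas", split="train"))

merged = concatenate_datasets(parts).filter(keep).shuffle(seed=42)
n = len(merged)
train = merged.select(range(int(n * 0.90)))
validation = merged.select(range(int(n * 0.90), int(n * 0.95)))
test = merged.select(range(int(n * 0.95), n))
```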
---
## Splits
### `stage1` — Light augmentation (for Stage 1 training: encoder warm-up)
Same 659,658 training samples as `stage2`, but with lighter augmentation.
Each image is independently re-augmented, so `stage1` and `stage2` are **not identical**.
Augmentations applied (each with independent probability):
| Augmentation | Probability | Parameters |
|---|---|---|
| Gaussian blur | 0.30 | radius 0.3 – 1.2 |
| Rotation | 0.30 | –3° to +3° |
| Background color blend | 0.40 | random warm background |
| Edge shadow | 0.20 | left / right / top / bottom |
| Low resolution | 0.20 | downscale 20–60% then upscale |
35% of samples are kept clean (no augmentation applied).
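An illustrative PIL sketch of the light pipeline, showing two of the five operations (the ordering, the white fill color, and placing the clean-sample gate first are assumptions; probabilities and ranges come from the table above):
```python
import random
from PIL import Image, ImageFilter

def augment_stage1(img: Image.Image) -> Image.Image:
    # 35% of samples pass through untouched.
    if random.random() < 0.35:
        return img
    if random.random() < 0.30:  # Gaussian blur, radius 0.3-1.2
        img = img.filter(ImageFilter.GaussianBlur(random.uniform(0.3, 1.2)))
    if random.random() < 0.30:  # rotation, -3 to +3 degrees
        img = img.rotate(random.uniform(-3.0, 3.0), fillcolor=(255, 255, 255))
    return img
```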
---
### `stage2` — Heavy augmentation (for Stage 2 training: LoRA fine-tuning)
Same training indices as `stage1`, re-augmented with a heavier pipeline.
Augmentations applied (each with independent probability):
| Augmentation | Probability | Parameters |
|---|---|---|
| JPEG compression | 0.40 | quality 30 – 75 |
| Low resolution | 0.40 | downscale 20–60% then upscale |
| Gaussian noise | 0.35 | std 5 – 25 |
| Salt & pepper noise | 0.20 | amount 1 – 5% |
| Gaussian blur | 0.30 | radius 0.3 – 1.2 |
| Background color blend | 0.40 | random warm background |
| Rotation | 0.35 | –3° to +3° |
| Perspective distortion | 0.25 | deviation 2 – 6% |
| Random erase | 0.30 | 1–3 rectangles, white/black/gray fill |
| Edge shadow | 0.25 | left / right / top / bottom |
35% of samples are kept clean (no augmentation applied).
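Two of the heavy-pipeline operations that do not appear in stage 1, sketched with PIL and NumPy (the implementations are assumptions; the parameter ranges match the table above):
```python
import io
import random
import numpy as np
from PIL import Image

def jpeg_compress(img: Image.Image) -> Image.Image:
    # Re-encode at a random JPEG quality in [30, 75] to add compression artifacts.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(30, 75))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def gaussian_noise(img: Image.Image) -> Image.Image:
    # Add zero-mean pixel noise with a random std in [5, 25].
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.normal(0.0, random.uniform(5.0, 25.0), arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```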
---
### `validation` and `test`
Drawn from the same shuffled pool (5% each). No augmentation — images are only resized to 64px height.
---
## Data Format
Each row contains two columns:
| Column | Type | Description |
|---|---|---|
| `image` | `PIL.Image` (JPEG) | Formula image, height=64px, width≤672px |
| `label` | `str` | Ground-truth LaTeX string |
**Image preprocessing:**
- Convert to RGB
- Resize to height=64px (width scaled proportionally)
- Width clamped to 672px maximum
- Width aligned to nearest multiple of 16px (patch size)
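A minimal sketch of that preprocessing (the rounding direction of the 16px alignment is an assumption; the card only says the width is aligned to the nearest multiple):
```python
from PIL import Image

def preprocess(img: Image.Image, height: int = 64,
               max_width: int = 672, align: int = 16) -> Image.Image:
    # Fixed 64px height, width scaled proportionally.
    img = img.convert("RGB")
    w = round(img.width * height / img.height)
    # Clamp to 672px, then snap to the nearest multiple of 16 (minimum 16).
    w = max(align, round(min(w, max_width) / align) * align)
    return img.resize((w, height), Image.LANCZOS)
```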
**LaTeX tokenization** (for reference, not stored):
```python
import re

label = r"\frac{1}{2}"  # example input
tokens = re.findall(r"\\[a-zA-Z]+|\S", label)
print(tokens)  # ['\\frac', '{', '1', '}', '{', '2', '}']
```
---
## Usage
### Load the full dataset
```python
from datasets import load_dataset
ds = load_dataset("harryrobert/latex-ocr")
print(ds)
# DatasetDict({
# stage1: Dataset({features: ['image', 'label'], num_rows: 659658}),
# stage2: Dataset({features: ['image', 'label'], num_rows: 659658}),
# validation: Dataset({features: ['image', 'label'], num_rows: 36647}),
# test: Dataset({features: ['image', 'label'], num_rows: 36647})
# })
```
### Load a specific split
```python
from datasets import load_dataset

train_stage1 = load_dataset("harryrobert/latex-ocr", split="stage1")
val = load_dataset("harryrobert/latex-ocr", split="validation")

sample = train_stage1[0]
print(sample["label"])  # e.g. '\frac{1}{2}'
sample["image"].show()  # PIL Image
```
### Streaming (large splits)
```python
from datasets import load_dataset

ds = load_dataset("harryrobert/latex-ocr", split="stage1", streaming=True)
for sample in ds.take(5):
    print(sample["label"])
```
### PyTorch DataLoader integration
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("harryrobert/latex-ocr", split="stage1")
ds = ds.with_format("torch")

# Caution: image widths vary (16-672 px), so the default collate_fn cannot
# stack a batch into one tensor; use a padding collate_fn (see sketch below).
loader = DataLoader(ds, batch_size=32, shuffle=True)
```
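A minimal padding collate sketch (the helper name `pad_collate` and padding with white pixels are illustrative choices, not part of the dataset):
```python
import torch
from torch.utils.data import DataLoader
from torchvision.transforms.functional import pil_to_tensor
from datasets import load_dataset

def pad_collate(batch):
    # Pad variable-width (3, 64, W) images to the widest image in the batch.
    images = [pil_to_tensor(s["image"]) for s in batch]
    max_w = max(img.shape[-1] for img in images)
    out = torch.full((len(images), 3, 64, max_w), 255, dtype=torch.uint8)
    for i, img in enumerate(images):
        out[i, :, :, : img.shape[-1]] = img
    return {"images": out, "labels": [s["label"] for s in batch]}

ds = load_dataset("harryrobert/latex-ocr", split="stage1")  # keep PIL images
loader = DataLoader(ds, batch_size=32, shuffle=True, collate_fn=pad_collate)
```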
---
## Training Recipe
This dataset is designed for a two-stage training pipeline:
**Stage 1** — Train visual encoder, freeze language model decoder:
```
train on split="stage1"
evaluate on split="validation"
```
**Stage 2** — LoRA fine-tuning of the full model:
```
train on split="stage2"
evaluate on split="validation"
```
**Final evaluation:**
```
evaluate on split="test"
```
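In PyTorch terms, stage 1 freezes the decoder while the encoder trains, and stage 2 adds LoRA adapters. A hedged sketch with a toy stand-in model (the `encoder`/`decoder` attribute names, the use of the `peft` library, and the LoRA hyperparameters are all assumptions about your model, not part of this dataset):
```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class TinyOCR(nn.Module):
    # Toy stand-in for a visual-encoder / language-decoder OCR model.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 8, kernel_size=3)
        self.decoder = nn.Linear(8, 10)

model = TinyOCR()

# Stage 1: train the visual encoder, freeze the language decoder.
for p in model.decoder.parameters():
    p.requires_grad = False

# Stage 2: wrap the model with LoRA adapters and fine-tune those instead.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["decoder"]))
```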
---
## License
Dataset contents are derived from [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR)
and [OleehyO/latex-formulas](https://huggingface.co/datasets/OleehyO/latex-formulas).
Please refer to the original datasets for their respective licenses.