---
configs:
- config_name: default
  data_files:
  - split: stage1
    path: "data/stage1-*.parquet"
  - split: stage2
    path: "data/stage2-*.parquet"
  - split: validation
    path: "data/val-*.parquet"
  - split: test
    path: "data/test-*.parquet"
task_categories:
- image-to-text
language:
- en
tags:
- latex
- ocr
- math
- formula-recognition
license: cc-by-4.0
---

# LaTeX OCR Dataset

A dataset for training LaTeX OCR models that convert images of mathematical formulas into LaTeX source code.
Built by merging and re-splitting multiple public sources, then applying two levels of augmentation for two-stage training.

---

## Dataset Summary

| Property | Value |
|---|---|
| Total unique samples | ~732,952 |
| Train (stage1) | 659,658 |
| Train (stage2) | 659,658 |
| Validation | 36,647 |
| Test | 36,647 |
| Image height | 64 px (fixed) |
| Image width | 16 – 672 px (variable, aligned to 16 px) |
| Label format | Raw LaTeX string |
| Max token length | 200 tokens |

---

## Sources

All data is merged from the following public datasets, filtered, shuffled, then re-split 90/5/5:

| Dataset | Config | Splits used |
|---|---|---|
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `full` | train + validation + test |
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `synthetic_handwrite` | train + validation + test |
| [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) | `human_handwrite` | train + validation + test |
| [OleehyO/latex-formulas](https://huggingface.co/datasets/OleehyO/latex-formulas) | `cleaned_formulas` | train |

**Filtering:** samples with fewer than 2 or more than 200 LaTeX tokens are removed.
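
For reference, the length filter can be sketched with the tokenizer described under "Data Format" (the helper name `keep_sample` is illustrative, not from the build script):

```python
import re

# A LaTeX command like \frac counts as one token; every other
# non-whitespace character counts as one token.
TOKEN_RE = re.compile(r"\\[a-zA-Z]+|[^\s]")

def keep_sample(label: str, min_tokens: int = 2, max_tokens: int = 200) -> bool:
    """True if the label survives the length filter."""
    n = len(TOKEN_RE.findall(label))
    return min_tokens <= n <= max_tokens
```

For example, `keep_sample(r"\frac{1}{2}")` is `True` (7 tokens), while `keep_sample("x")` is `False` (1 token).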

---

## Splits

### `stage1` — Light augmentation (for Stage 1 training: encoder warm-up)

Same 659,658 training samples as `stage2`, but with lighter augmentation.
Each image is independently re-augmented, so `stage1` and `stage2` are **not identical**.

Augmentations applied (each with independent probability):

| Augmentation | Probability | Parameters |
|---|---|---|
| Gaussian blur | 0.30 | radius 0.3 – 1.2 |
| Rotation | 0.30 | –3° to +3° |
| Background color blend | 0.40 | random warm background |
| Edge shadow | 0.20 | left / right / top / bottom |
| Low resolution | 0.20 | downscale 20–60% then upscale |

35% of samples are kept clean (no augmentation applied).
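
As a rough illustration, the light pipeline could look like the following PIL sketch. Only the probabilities and ranges come from the table above; treating the clean-sample rate as a first gate, the white fill color, and the omission of the background-blend and edge-shadow steps are all assumptions:

```python
import random
from PIL import Image, ImageFilter

def augment_light(img: Image.Image) -> Image.Image:
    # 35% of samples bypass augmentation entirely (assumed to be a first gate)
    if random.random() < 0.35:
        return img
    if random.random() < 0.30:  # Gaussian blur, radius 0.3-1.2
        img = img.filter(ImageFilter.GaussianBlur(random.uniform(0.3, 1.2)))
    if random.random() < 0.30:  # rotation, -3 to +3 degrees, white fill assumed
        img = img.rotate(random.uniform(-3, 3), fillcolor=(255, 255, 255))
    if random.random() < 0.20:  # low resolution: downscale 20-60%, then upscale
        w, h = img.size
        f = random.uniform(0.4, 0.8)  # keep 40-80% of the original size
        img = img.resize((max(1, int(w * f)), max(1, int(h * f)))).resize((w, h))
    return img
```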

---

### `stage2` — Heavy augmentation (for Stage 2 training: LoRA fine-tuning)

Same training indices as `stage1`, re-augmented with a heavier pipeline.

Augmentations applied (each with independent probability):

| Augmentation | Probability | Parameters |
|---|---|---|
| JPEG compression | 0.40 | quality 30 – 75 |
| Low resolution | 0.40 | downscale 20–60% then upscale |
| Gaussian noise | 0.35 | std 5 – 25 |
| Salt & pepper noise | 0.20 | amount 1 – 5% |
| Gaussian blur | 0.30 | radius 0.3 – 1.2 |
| Background color blend | 0.40 | random warm background |
| Rotation | 0.35 | –3° to +3° |
| Perspective distortion | 0.25 | deviation 2 – 6% |
| Random erase | 0.30 | 1–3 rectangles, white/black/gray fill |
| Edge shadow | 0.25 | left / right / top / bottom |

35% of samples are kept clean (no augmentation applied).
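
The two steps unique to the heavy pipeline, JPEG compression and Gaussian noise, can be sketched as follows. The ranges come from the table above; the function names and all other details are illustrative assumptions:

```python
import io
import random
import numpy as np
from PIL import Image

def jpeg_compress(img: Image.Image) -> Image.Image:
    # JPEG round-trip at a random quality in 30-75
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(30, 75))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def gaussian_noise(img: Image.Image) -> Image.Image:
    # additive Gaussian noise with std drawn from 5-25
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, random.uniform(5, 25), arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```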

---

### `validation` and `test`

Drawn from the same shuffled pool (5% each). No augmentation — images are only resized to 64px height.

---

## Data Format

Each row contains two columns:

| Column | Type | Description |
|---|---|---|
| `image` | `PIL.Image` (JPEG) | Formula image, height=64px, width≤672px |
| `label` | `str` | Ground-truth LaTeX string |

**Image preprocessing:**
- Convert to RGB
- Resize to height=64px (width scaled proportionally)
- Width clamped to 672px maximum
- Width aligned to nearest multiple of 16px (patch size)
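
The steps above can be sketched as (the resampling filter and the helper name `preprocess` are assumptions):

```python
from PIL import Image

PATCH = 16     # patch size: widths are aligned to multiples of this
TARGET_H = 64  # fixed output height
MAX_W = 672    # width cap

def preprocess(img: Image.Image) -> Image.Image:
    img = img.convert("RGB")
    w, h = img.size
    new_w = round(w * TARGET_H / h)                   # keep aspect ratio
    new_w = min(new_w, MAX_W)                         # clamp width to 672px
    new_w = max(PATCH, round(new_w / PATCH) * PATCH)  # align to 16px
    return img.resize((new_w, TARGET_H), Image.BILINEAR)
```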

**LaTeX tokenization** (for reference, not stored):
```python
import re

tokens = re.findall(r"\\[a-zA-Z]+|[^\s]", label)
```

---

## Usage

### Load the full dataset

```python
from datasets import load_dataset

ds = load_dataset("harryrobert/latex-ocr")
print(ds)
# DatasetDict({
#     stage1: Dataset({features: ['image', 'label'], num_rows: 659658}),
#     stage2: Dataset({features: ['image', 'label'], num_rows: 659658}),
#     validation: Dataset({features: ['image', 'label'], num_rows: 36647}),
#     test: Dataset({features: ['image', 'label'], num_rows: 36647})
# })
```

### Load a specific split

```python
from datasets import load_dataset

train_stage1 = load_dataset("harryrobert/latex-ocr", split="stage1")
val = load_dataset("harryrobert/latex-ocr", split="validation")

sample = train_stage1[0]
print(sample["label"])  # e.g. '\frac{1}{2}'
sample["image"].show()  # PIL Image
```

### Streaming (large splits)

```python
from datasets import load_dataset

ds = load_dataset("harryrobert/latex-ocr", split="stage1", streaming=True)
for sample in ds.take(5):
    print(sample["label"])
```

### PyTorch DataLoader integration

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("harryrobert/latex-ocr", split="stage1")
ds = ds.with_format("torch")

# Note: images have variable widths, so real training needs a
# custom collate_fn that pads each batch to a common width.
loader = DataLoader(ds, batch_size=32, shuffle=True)
```
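
Because widths vary per image, the default collate cannot stack a batch into one tensor. A minimal padding collate might look like this, assuming `with_format("torch")` yields `(H, W, C)` uint8 tensors; the function name `pad_collate` and the white padding value are assumptions:

```python
import torch
import torch.nn.functional as F

def pad_collate(batch):
    # Pad every image in the batch to the widest width, using white (255)
    images = [item["image"] for item in batch]  # each (64, W_i, 3), uint8
    labels = [item["label"] for item in batch]  # raw LaTeX strings
    max_w = max(img.shape[1] for img in images)
    # F.pad pads trailing dims first: (C_left, C_right, W_left, W_right)
    padded = torch.stack([
        F.pad(img, (0, 0, 0, max_w - img.shape[1]), value=255)
        for img in images
    ])
    return {"image": padded, "label": labels}

# loader = DataLoader(ds, batch_size=32, shuffle=True, collate_fn=pad_collate)
```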

---

## Training Recipe

This dataset is designed for a two-stage training pipeline:

**Stage 1** — Train visual encoder, freeze language model decoder:
```
train on split="stage1"
evaluate on split="validation"
```

**Stage 2** — LoRA fine-tuning of the full model:
```
train on split="stage2"
evaluate on split="validation"
```

**Final evaluation:**
```
evaluate on split="test"
```

---

## License

Dataset contents are derived from [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR)
and [OleehyO/latex-formulas](https://huggingface.co/datasets/OleehyO/latex-formulas).
Please refer to the original datasets for their respective licenses.