# LaTeX OCR Dataset

A dataset for training LaTeX OCR models — converting images of mathematical formulas to LaTeX source. Built with a 3-stage curriculum training pipeline.
## Sources

- linxy/LaTeX_OCR — subsets `full`, `synthetic_handwrite`, `human_handwrite`
- OleehyO/latex-formulas — subset `cleaned_formulas`
## Splits
| Split | Samples | Shards | Size | Description |
|---|---|---|---|---|
| mlp-train | 574,490 | 7 | 3.1 GB | Stage 1 — light augmentation (blur, rotation, resolution resample) |
| full-train | 127,108 | 3 | 1.1 GB | Stage 2 — heavy augmentation (JPEG, noise, perspective, erase) |
| sft-train | 2,416 | 1 | 20 MB | Stage 3 — handwriting data for SFT |
| dev | 16,950 | 1 | 76 MB | Validation — raw images, no augmentation |
| test | 16,557 | 1 | 75 MB | Test — raw images, no augmentation |
## Format

Each row contains:

| Column | Type | Description |
|---|---|---|
| `index` | int64 | Global sample index |
| `image` | Image | Formula image (JPEG, arbitrary resolution) |
| `label` | string | LaTeX formula string |
## Label Statistics
| Split | Mean tokens | Median tokens | P95 tokens | Vocab size |
|---|---|---|---|---|
| mlp-train | 65.0 | 55 | 147 | 1,281 |
| full-train | 60.5 | 53 | 128 | 821 |
| sft-train | 55.5 | 46 | 126 | 372 |
| dev | 62.8 | 54 | 137 | 602 |
| test | 62.6 | 54 | 136 | 601 |
Token count uses the regex `\\[a-zA-Z]+|[^\s]`. All samples are filtered to [2, 200] tokens.
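The token definition and length filter above can be reproduced in a few lines of Python (function names here are illustrative, not taken from the actual pipeline):

```python
import re

# A token is either a LaTeX command (\frac, \alpha, ...) or any
# single non-whitespace character.
TOKEN_RE = re.compile(r"\\[a-zA-Z]+|[^\s]")

def count_tokens(label: str) -> int:
    return len(TOKEN_RE.findall(label))

def in_length_range(label: str, lo: int = 2, hi: int = 200) -> bool:
    # Samples outside [lo, hi] tokens are filtered out.
    return lo <= count_tokens(label) <= hi

print(count_tokens(r"\frac{1}{2}"))  # \frac { 1 } { 2 } -> 7
```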
## Augmentation Pipeline

### Stage 1 — Light (`mlp-train`)
- Inception crop, multi-scale resolution resample (192–768px long side)
- Gaussian blur (p=0.3), rotation ±3° (p=0.3)
- Token drop (patch masking, decaying probability)
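A minimal sketch of the light transform using PIL only (inception crop and token drop are omitted; the probabilities follow the list above, while the blur radius and rotation fill are assumptions):

```python
import random
from PIL import Image, ImageFilter

def light_augment(img: Image.Image) -> Image.Image:
    # Multi-scale resolution resample: random long side in [192, 768] px.
    long_side = random.randint(192, 768)
    scale = long_side / max(img.size)
    img = img.resize((max(1, round(img.width * scale)),
                      max(1, round(img.height * scale))))
    # Gaussian blur with probability 0.3 (radius is a guess).
    if random.random() < 0.3:
        img = img.filter(ImageFilter.GaussianBlur(radius=1.0))
    # Small rotation (±3°) with probability 0.3; white fill matches
    # a typical formula-render background.
    if random.random() < 0.3:
        img = img.rotate(random.uniform(-3, 3), expand=True, fillcolor="white")
    return img
```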
### Stage 2 — Heavy (`full-train`)
- Inception crop, multi-scale resolution resample
- JPEG compression quality 30–75 (p=0.4), Gaussian noise (p=0.3), perspective distortion (p=0.3)
- Token drop
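The JPEG and noise corruptions can be sketched as follows (the quality range comes from the list above; the noise standard deviation is an assumption):

```python
import io
import random
import numpy as np
from PIL import Image

def jpeg_corrupt(img: Image.Image, q_lo: int = 30, q_hi: int = 75) -> Image.Image:
    # Re-encode at a random low JPEG quality to simulate compression artifacts.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=random.randint(q_lo, q_hi))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def add_gaussian_noise(img: Image.Image, sigma: float = 8.0) -> Image.Image:
    # Additive per-pixel noise; sigma is a guess, not stated by the card.
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    arr += np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```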
### Stage 3 — SFT (`sft-train`)
- Handwriting data (synthetic + human) with heavy augmentation
- Replay subset from stage 2 to prevent catastrophic forgetting
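Replay mixing for stage 3 might look like this sketch (the replay fraction is a hypothetical parameter; the card does not state the actual ratio):

```python
import random

def build_sft_mix(sft_samples, replay_pool, replay_frac=0.2, seed=0):
    # Mix a random subset of stage-2 data into the SFT set to
    # mitigate catastrophic forgetting.
    rng = random.Random(seed)
    k = int(len(sft_samples) * replay_frac)
    mix = list(sft_samples) + rng.sample(list(replay_pool), k)
    rng.shuffle(mix)
    return mix
```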
### Dev / Test
- Raw images, no augmentation
## Usage

```python
from datasets import load_dataset

ds = load_dataset("<repo_id>")

# Stage 1 training
for sample in ds["mlp-train"]:
    image = sample["image"]  # PIL.Image
    label = sample["label"]  # str, e.g. "\\frac{1}{2}"
    index = sample["index"]  # int

# Evaluation
for sample in ds["dev"]:
    ...
```
## Training Notes

- Images have arbitrary resolution — designed for NaViT-style models that handle variable-size inputs natively; `resize_image()` should be called in the dataloader if using a fixed-resolution encoder
- Tokenizer: `Qwen/Qwen2.5-Coder-1.5B` — verified full coverage, no UNK tokens
- Recommended training order: `mlp-train` → `full-train` → `sft-train`
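For fixed-resolution encoders, a pad-then-resize helper along these lines could serve; the card references `resize_image()` without defining it, so this implementation is only a sketch:

```python
from PIL import Image

def resize_image(img: Image.Image, size: int = 384) -> Image.Image:
    # Pad to square (preserving aspect ratio) on a white canvas that
    # matches typical formula renders, then resize to the target edge.
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), "white")
    canvas.paste(img.convert("RGB"),
                 ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((size, size))
```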