---
language:
  - en
  - zh
license: apache-2.0
task_categories:
  - image-to-text
tags:
  - ocr
  - formula-recognition
  - text-recognition
  - document-parsing
---

UniRec40M: Unified Text and Formula Recognition Dataset

Paper | Code | Demo

UniRec40M is a large-scale dataset comprising 40 million samples of text, formulas, and mixed content. It was introduced in the paper "UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters" to enable the training of lightweight yet powerful models for document parsing.

The dataset covers multiple levels of recognition, including characters, words, lines, paragraphs, and full documents. It specifically addresses challenges like structural variability and semantic entanglement between text and mathematical formulas.

Features

  • Large Scale: 40 million high-quality samples.
  • Unified Recognition: Supports plain text (words, lines, paragraphs), formulas (single-line, multi-line), and mixed content.
  • Bilingual Support: Comprehensive coverage of Chinese and English documents.
  • Multi-domain: Samples drawn from diverse document types and domains.
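
To make the combination of recognition levels and content types concrete, the sketch below shows one way a unified sample could be represented. This is a hypothetical record layout for illustration only; the field names and values are assumptions, not the dataset's documented schema:

```python
# Hypothetical sample record illustrating the recognition levels and content
# types listed above. Field names are assumptions, NOT the actual UniRec40M schema.
sample = {
    "image": "crop_000123.png",   # rendered image of the content
    "level": "line",              # character | word | line | paragraph | document
    "content_type": "mixed",      # text | formula | mixed
    "language": "en",             # en | zh
    "label": r"The energy is \(E = mc^2\) in this frame.",
}

def contains_formula(record):
    """Check whether a record's label carries LaTeX math delimiters."""
    return any(d in record["label"] for d in (r"\(", r"\[", "$$", r"\begin{"))

print(contains_formula(sample))
```

A "mixed" sample carries natural-language text and inline LaTeX in one label, which is exactly the text-formula entanglement the dataset is designed to address.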

Quick Start

You can use the associated openocr-python package for inference with models trained on this data:

```python
from openocr import OpenOCR

# Initialize the engine (using the ONNX backend as an example)
onnx_engine = OpenOCR(backend='onnx', device='cpu')

# Path to your image
img_path = '/path/to/your/image.png'

# Perform recognition; returns the result and the elapsed time
result, elapse = onnx_engine(img_path)
print(result)
```
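
To process many images, the per-image call above can be wrapped in a small loop. The helper below is a sketch: `recognize_folder` and `stub_engine` are illustrative names, not part of the openocr-python package. Any callable with the same `(result, elapse) = engine(path)` signature works:

```python
from pathlib import Path

def recognize_folder(engine, folder, exts=(".png", ".jpg", ".jpeg")):
    """Run a recognition engine over every image in a folder.

    `engine` is any callable with the signature shown above: it takes an
    image path and returns a (result, elapsed_seconds) pair.
    """
    results = {}
    for img in sorted(Path(folder).iterdir()):
        if img.suffix.lower() in exts:
            result, elapse = engine(str(img))
            results[img.name] = result
    return results

# A stub stands in for the OpenOCR engine so this sketch is self-contained;
# replace it with the `onnx_engine` from the example above.
def stub_engine(path):
    return f"recognized:{path}", 0.01
```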

Citation

If you find this dataset or the UniRec-0.1B model useful for your research, please cite:

@article{du2025unirec,
  title={UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters},
  author={Yongkun Du and Zhineng Chen and Yazhen Xie and Weikang Bai and Hao Feng and Wei Shi and Yuchen Su and Can Huang and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2512.21095},
  year={2025}
}

@inproceedings{Du2025SVTRv2,
  title={SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition},
  author={Yongkun Du and Zhineng Chen and Hongtao Xie and Caiyan Jia and Yu-Gang Jiang},
  booktitle={ICCV},
  year={2025},
  pages={20147--20156}
}

Acknowledgement

This project is maintained by the OCR team from the FVL Laboratory, Fudan University. The codebase is built upon PaddleOCR, PytorchOCR, and MMOCR.