|
|
--- |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- image-to-text |
|
|
tags: |
|
|
- ocr |
|
|
- formula-recognition |
|
|
- text-recognition |
|
|
- document-parsing |
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
<h1> UniRec40M: Unified Text and Formula Recognition Dataset </h1> |
|
|
|
|
|
[**Paper**](https://huggingface.co/papers/2512.21095) | [**Code**](https://github.com/Topdu/OpenOCR) | [**Demo**](https://huggingface.co/spaces/topdu/OpenOCR-UniRec-Demo) |
|
|
|
|
|
</div> |
|
|
|
|
|
**UniRec40M** is a large-scale dataset comprising 40 million samples of text, formulas, and mixed content. It was introduced in the paper "[UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters](https://huggingface.co/papers/2512.21095)" to enable the training of lightweight yet powerful models for document parsing. |
|
|
|
|
|
The dataset spans multiple recognition granularities, including characters, words, lines, paragraphs, and full documents. It specifically targets challenges such as structural variability and the semantic entanglement between text and mathematical formulas.
|
|
|
|
|
## Features |
|
|
|
|
|
- **Large Scale**: 40 million high-quality samples. |
|
|
- **Unified Recognition**: Supports plain text (words, lines, paragraphs), formulas (single-line, multi-line), and mixed content. |
|
|
- **Bilingual Support**: Comprehensive coverage of Chinese and English documents. |
|
|
- **Multi-domain**: Samples drawn from diverse document types and domains. |
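
To illustrate what "mixed content" means in practice, here is a small sketch that splits a label interleaving plain text with inline LaTeX formulas into typed segments. Note this is purely illustrative: the `$...$` delimiter convention and the segment representation are assumptions for the example, not the dataset's actual annotation schema.

```python
import re

def split_mixed_label(label: str):
    """Split a mixed text/formula label into (kind, content) segments.

    Assumes inline formulas are delimited by $...$ -- an illustrative
    convention, not necessarily the one used by UniRec40M.
    """
    segments = []
    # The capturing group keeps the formula spans in the split output.
    for part in re.split(r"(\$[^$]*\$)", label):
        if not part:
            continue
        if part.startswith("$") and part.endswith("$"):
            segments.append(("formula", part[1:-1]))
        else:
            segments.append(("text", part))
    return segments

print(split_mixed_label("Energy is $E = mc^2$ in relativity."))
# → [('text', 'Energy is '), ('formula', 'E = mc^2'), ('text', ' in relativity.')]
```

A unified recognizer must handle both segment kinds within a single prediction, which is the core difficulty the mixed-content samples are designed to cover.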
|
|
|
|
|
## Quick Start |
|
|
|
|
|
You can use the companion `openocr-python` package (installable via `pip install openocr-python`) to run inference with models trained on this dataset:
|
|
|
|
|
```python |
|
|
from openocr import OpenOCR |
|
|
|
|
|
# Initialize the engine (using ONNX as an example) |
|
|
onnx_engine = OpenOCR(backend='onnx', device='cpu') |
|
|
|
|
|
# Path to your image |
|
|
img_path = '/path/to/your/image.png' |
|
|
|
|
|
# Perform recognition |
|
|
result, elapse = onnx_engine(img_path) |
|
|
print(result) |
|
|
``` |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you find this dataset or the UniRec-0.1B model useful for your research, please cite: |
|
|
|
|
|
```bibtex |
|
|
@article{du2025unirec, |
|
|
title={UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters}, |
|
|
author={Yongkun Du and Zhineng Chen and Yazhen Xie and Weikang Bai and Hao Feng and Wei Shi and Yuchen Su and Can Huang and Yu-Gang Jiang}, |
|
|
journal={arXiv preprint arXiv:2512.21095}, |
|
|
year={2025} |
|
|
} |
|
|
|
|
|
@inproceedings{Du2025SVTRv2, |
|
|
title={SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition}, |
|
|
author={Yongkun Du and Zhineng Chen and Hongtao Xie and Caiyan Jia and Yu-Gang Jiang}, |
|
|
booktitle={ICCV}, |
|
|
year={2025}, |
|
|
  pages={20147--20156}
|
|
} |
|
|
``` |
|
|
|
|
|
## Acknowledgement |
|
|
|
|
|
This project is maintained by the OCR team from the [FVL Laboratory](https://fvl.fudan.edu.cn), Fudan University. The codebase is built upon [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR), [PytorchOCR](https://github.com/WenmuZhou/PytorchOCR), and [MMOCR](https://github.com/open-mmlab/mmocr). |