Add paper link, task categories, and metadata #1
by nielsr (HF Staff) - opened
README.md
CHANGED
@@ -1,290 +1,66 @@
- <
- <
- <a href='https://arxiv.org/abs/2411.15858'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
- <a href="https://huggingface.co/spaces/topdu/OpenOCR-Demo" target="_blank"><img src="https://img.shields.io/badge/%F0%9F%A4%97-Hugging Face Demo-blue"></a>
- <a href="https://modelscope.cn/studios/topdktu/OpenOCR-Demo" target="_blank"><img src="https://img.shields.io/badge/魔搭-Demo-blue"></a>
- <a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-pink.svg"></a>
- <a href="https://github.com/Topdu/OpenOCR/graphs/contributors"><img src="https://img.shields.io/github/contributors/Topdu/OpenOCR?color=9ea"></a>
- <a href="https://pepy.tech/project/openocr"><img src="https://static.pepy.tech/personalized-badge/openocr?period=total&units=abbreviation&left_color=grey&right_color=blue&left_text=Clone%20downloads"></a>
- <a href="https://github.com/Topdu/OpenOCR/stargazers"><img src="https://img.shields.io/github/stars/Topdu/OpenOCR?color=ccf"></a>
- <a href="https://pypi.org/project/openocr-python/"><img alt="PyPI" src="https://img.shields.io/pypi/v/openocr-python"><img src="https://img.shields.io/pypi/dm/openocr-python?label=PyPI%20downloads"></a>

</div>

- We sincerely welcome researchers to recommend OCR or related algorithms and to point out any potential factual errors or bugs. Upon receiving suggestions, we will promptly evaluate and carefully reproduce them. We look forward to collaborating with you to advance the development of OpenOCR and continuously contribute to the OCR community!

## Features

- - 🔥**[OpenDoc-0.1B](./docs/opendoc.md): Ultra-Lightweight Document Parsing System with 0.1B Parameters**

-   - An ultra-lightweight document parsing system with only 0.1B parameters
-   - Two-stage pipeline:
-     1. Layout analysis via **[PP-DocLayoutV2](https://www.paddleocr.ai/latest/version3.x/module_usage/layout_analysis.html)**
-     2. Unified recognition of text, formulas, and tables using the in-house model **[UniRec-0.1B](./docs/unirec.md)**
-   - In the original version of **UniRec-0.1B**, only **text and formula recognition** were supported. In **OpenDoc-0.1B**, we **rebuilt UniRec-0.1B** to enable **unified recognition of text, formulas, and tables**.
-   - Supports document parsing for **Chinese and English**
-   - Achieves **90.57% on [OmniDocBench (v1.5)](https://github.com/opendatalab/OmniDocBench/tree/main?tab=readme-ov-file#end-to-end-evaluation)**, outperforming many document parsing models based on multimodal large language models

- - 🔥**UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters**

-   - ⚡\[[Doc](./docs/unirec.md)\] \[[ModelScope Model](https://www.modelscope.cn/models/topdktu/unirec-0.1b)\] \[[Hugging Face Model](https://huggingface.co/topdu/unirec-0.1b)\] \[[ModelScope Demo](https://www.modelscope.cn/studios/topdktu/OpenOCR-UniRec-Demo)\] \[[Hugging Face Demo](https://huggingface.co/spaces/topdu/OpenOCR-UniRec-Demo)\] \[[Local Demo](./docs/unirec.md#local-demo)\] \[[Paper](https://arxiv.org/pdf/2512.21095)\]
-   - Recognizes plain text (words, lines, paragraphs), formulas (single-line, multi-line), and mixed text-and-formula content.
-   - 0.1B parameters.
-   - Trained from scratch on 40M samples without pre-training.
-   - Supports both Chinese and English text and formula recognition.

- - 🔥**OpenOCR: A general OCR system balancing accuracy and efficiency**

-   - ⚡\[[Quick Start](#quick-start)\] \[[Model](https://github.com/Topdu/OpenOCR/releases/tag/develop0.0.1)\] \[[ModelScope Demo](https://modelscope.cn/studios/topdktu/OpenOCR-Demo)\] \[[Hugging Face Demo](https://huggingface.co/spaces/topdu/OpenOCR-Demo)\] \[[Local Demo](#local-demo)\] \[[PaddleOCR Implementation](https://paddlepaddle.github.io/PaddleOCR/latest/algorithm/text_recognition/algorithm_rec_svtrv2.html)\]
-   - [Introduction](./docs/openocr.md)
-   - A practical OCR system built on SVTRv2.
-   - Outperforms the [PP-OCRv4](https://paddlepaddle.github.io/PaddleOCR/latest/ppocr/model_list.html) baseline by 4.5% in accuracy on the [OCR competition leaderboard](https://aistudio.baidu.com/competition/detail/1131/0/leaderboard) while maintaining comparable inference speed.
-   - [x] Supports Chinese and English text detection and recognition.
-   - [x] Provides server and mobile models.
-   - [x] Fine-tuning OpenOCR on a custom dataset: [Fine-tuning Det](./docs/finetune_det.md), [Fine-tuning Rec](./docs/finetune_rec.md).
-   - [x] [ONNX model export for wider compatibility](#export-onnx-model).

- - 🔥**SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition (ICCV 2025)**

-   - \[[Paper](https://arxiv.org/abs/2411.15858)\] \[[Doc](./configs/rec/svtrv2/)\] \[[Model](./configs/rec/svtrv2/readme.md#11-models-and-results)\] \[[Datasets](./docs/svtrv2.md#downloading-datasets)\] \[[Config, Training and Inference](./configs/rec/svtrv2/readme.md#3-model-training--evaluation)\] \[[Benchmark](./docs/svtrv2.md#results-benchmark--configs--checkpoints)\]
-   - [Introduction](./docs/svtrv2.md)
-   - A unified training and evaluation benchmark (built on [Union14M](https://github.com/Mountchicken/Union14M?tab=readme-ov-file#3-union14m-dataset)) for Scene Text Recognition.
-   - Supports 24 Scene Text Recognition methods trained from scratch on the large-scale real dataset [Union14M-L-Filter](./docs/svtrv2.md#dataset-details); the latest methods will continue to be added.
-   - Improves accuracy by 20-30% compared to models trained on synthetic datasets.
-   - Towards arbitrary-shaped text recognition and language modeling with a single visual model.
-   - Surpasses attention-based encoder-decoder methods across challenging scenarios in terms of both accuracy and speed.
-   - [Get Started](./docs/svtrv2.md#get-started-with-training-a-sota-scene-text-recognition-model-from-scratch) with training a SOTA Scene Text Recognition model from scratch.

- ## Our OCR algorithms

- - [**UniRec-0.1B**](./configs/rec/unirec/) (*Yongkun Du, Zhineng Chen, Yazhen Xie, Weikang Bai, Hao Feng, Wei Shi, Yuchen Su, Can Huang, Yu-Gang Jiang. UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters,* Preprint. [Doc](./configs/rec/unirec/), [Paper](https://arxiv.org/pdf/2512.21095))
- - [**MDiff4STR**](./configs/rec/mdiff4str/) (*Yongkun Du, Miaomiao Zhao, Songlin Fan, Zhineng Chen\*, Caiyan Jia, Yu-Gang Jiang. MDiff4STR: Mask Diffusion Model for Scene Text Recognition,* AAAI 2026 Oral. [Doc](./configs/rec/mdiff4str/), [Paper](https://arxiv.org/abs/2512.01422))
- - **CMER** (*Weikang Bai, Yongkun Du, Yuchen Su, Yazhen Xie, Zhineng Chen\*. Complex Mathematical Expression Recognition: Benchmark, Large-Scale Dataset and Strong Baseline,* AAAI 2026. [Paper](https://arxiv.org/abs/2512.13731), code is coming soon.)
- - **TextSSR** (*Xingsong Ye, Yongkun Du, Yunbo Tao, Zhineng Chen\*. TextSSR: Diffusion-based Data Synthesis for Scene Text Recognition,* ICCV 2025. [Paper](https://openaccess.thecvf.com/content/ICCV2025/papers/Ye_TextSSR_Diffusion-based_Data_Synthesis_for_Scene_Text_Recognition_ICCV_2025_paper.pdf), [Code](https://github.com/YesianRohn/TextSSR))
- - [**SVTRv2**](./configs/rec/svtrv2) (*Yongkun Du, Zhineng Chen\*, Hongtao Xie, Caiyan Jia, Yu-Gang Jiang. SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition,* ICCV 2025. [Doc](./configs/rec/svtrv2/), [Paper](https://arxiv.org/abs/2411.15858))
- - [**IGTR**](./configs/rec/igtr/) (*Yongkun Du, Zhineng Chen\*, Yuchen Su, Caiyan Jia, Yu-Gang Jiang. Instruction-Guided Scene Text Recognition,* TPAMI 2025. [Doc](./configs/rec/igtr), [Paper](https://ieeexplore.ieee.org/document/10820836))
- - [**CPPD**](./configs/rec/cppd/) (*Yongkun Du, Zhineng Chen\*, Caiyan Jia, Xiaoting Yin, Chenxia Li, Yuning Du, Yu-Gang Jiang. Context Perception Parallel Decoder for Scene Text Recognition,* TPAMI 2025. [PaddleOCR Doc](https://github.com/PaddlePaddle/PaddleOCR/blob/main/docs/algorithm/text_recognition/algorithm_rec_cppd.en.md), [Paper](https://ieeexplore.ieee.org/document/10902187))
- - [**SMTR&FocalSVTR**](./configs/rec/smtr/) (*Yongkun Du, Zhineng Chen\*, Caiyan Jia, Xieping Gao, Yu-Gang Jiang. Out of Length Text Recognition with Sub-String Matching,* AAAI 2025. [Doc](./configs/rec/smtr/), [Paper](https://ojs.aaai.org/index.php/AAAI/article/view/32285))
- - [**DPTR**](./configs/rec/dptr/) (*Shuai Zhao, Yongkun Du, Zhineng Chen\*, Yu-Gang Jiang. Decoder Pre-Training with only Text for Scene Text Recognition,* ACM MM 2024. [Paper](https://dl.acm.org/doi/10.1145/3664647.3681390))
- - [**CDistNet**](./configs/rec/cdistnet/) (*Tianlun Zheng, Zhineng Chen\*, Shancheng Fang, Hongtao Xie, Yu-Gang Jiang. CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition,* IJCV 2024. [Paper](https://link.springer.com/article/10.1007/s11263-023-01880-0))
- - **MRN** (*Tianlun Zheng, Zhineng Chen\*, Bingchen Huang, Wei Zhang, Yu-Gang Jiang. MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition,* ICCV 2023. [Paper](https://openaccess.thecvf.com/content/ICCV2023/html/Zheng_MRN_Multiplexed_Routing_Network_for_Incremental_Multilingual_Text_Recognition_ICCV_2023_paper.html), [Code](https://github.com/simplify23/MRN))
- - **TPS++** (*Tianlun Zheng, Zhineng Chen\*, Jinfeng Bai, Hongtao Xie, Yu-Gang Jiang. TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition,* IJCAI 2023. [Paper](https://arxiv.org/abs/2305.05322), [Code](https://github.com/simplify23/TPS_PP))
- - [**SVTR**](./configs/rec/svtr/) (*Yongkun Du, Zhineng Chen\*, Caiyan Jia, Xiaoting Yin, Tianlun Zheng, Chenxia Li, Yuning Du, Yu-Gang Jiang. SVTR: Scene Text Recognition with a Single Visual Model,* IJCAI 2022 (Long). [PaddleOCR Doc](https://github.com/Topdu/PaddleOCR/blob/main/doc/doc_ch/algorithm_rec_svtr.md), [Paper](https://www.ijcai.org/proceedings/2022/124))
- - [**NRTR**](./configs/rec/nrtr/) (*Fenfen Sheng, Zhineng Chen, Bo Xu. NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition,* ICDAR 2019. [Paper](https://arxiv.org/abs/1806.00926))

- ## Recent Updates

- - **2025.12.25**: 🔥 Released [OpenDoc-0.1B](./docs/opendoc.md): Ultra-Lightweight Document Parsing System with 0.1B Parameters.
- - **2025.11.08**: Our paper [MDiff4STR](https://arxiv.org/abs/2512.01422) was accepted by AAAI 2026 (Oral). See [Doc](./configs/rec/mdiff4str/).
- - **2025.11.08**: Our paper [CMER](https://arxiv.org/abs/2512.13731) was accepted by AAAI 2026. Code is coming soon.
- - **2025.08.20**: 🔥 Released [UniRec-0.1B](https://arxiv.org/pdf/2512.21095): Unified Text and Formula Recognition with 0.1B Parameters.
- - **2025.07.10**: Our paper [SVTRv2](https://arxiv.org/abs/2411.15858) was accepted by ICCV 2025. See [Doc](./configs/rec/svtrv2/).
- - **2025.07.10**: Our paper [TextSSR](https://openaccess.thecvf.com/content/ICCV2025/papers/Ye_TextSSR_Diffusion-based_Data_Synthesis_for_Scene_Text_Recognition_ICCV_2025_paper.pdf) was accepted by ICCV 2025. See [Code](https://github.com/YesianRohn/TextSSR).
- - **2025.03.24**: 🔥 Released support for fine-tuning OpenOCR on a custom dataset: [Fine-tuning Det](./docs/finetune_det.md), [Fine-tuning Rec](./docs/finetune_rec.md).
- - **2025.03.23**: 🔥 Released [ONNX model export for wider compatibility](#export-onnx-model).
- - **2025.02.22**: Our paper [CPPD](https://ieeexplore.ieee.org/document/10902187) was accepted by TPAMI. See [Doc](./configs/rec/cppd/) and [PaddleOCR Doc](https://github.com/PaddlePaddle/PaddleOCR/blob/main/docs/algorithm/text_recognition/algorithm_rec_cppd.en.md).
- - **2024.12.31**: Our paper [IGTR](https://ieeexplore.ieee.org/document/10820836) was accepted by TPAMI. See [Doc](./configs/rec/igtr/).
- - **2024.12.16**: Our paper [SMTR](https://ojs.aaai.org/index.php/AAAI/article/view/32285) was accepted by AAAI 2025. See [Doc](./configs/rec/smtr/).
- - **2024.12.03**: The pre-training code for [DPTR](https://dl.acm.org/doi/10.1145/3664647.3681390) was merged.
- - **🔥 2024.11.23 release notes**:
-   - **OpenOCR: A general OCR system balancing accuracy and efficiency**
-     - ⚡\[[Quick Start](#quick-start)\] \[[Model](https://github.com/Topdu/OpenOCR/releases/tag/develop0.0.1)\] \[[ModelScope Demo](https://modelscope.cn/studios/topdktu/OpenOCR-Demo)\] \[[Hugging Face Demo](https://huggingface.co/spaces/topdu/OpenOCR-Demo)\] \[[Local Demo](#local-demo)\] \[[PaddleOCR Implementation](https://paddlepaddle.github.io/PaddleOCR/latest/algorithm/text_recognition/algorithm_rec_svtrv2.html)\]
-     - [Introduction](./docs/openocr.md)
-   - **SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition**
-     - \[[Paper](https://arxiv.org/abs/2411.15858)\] \[[Doc](./configs/rec/svtrv2/)\] \[[Model](./configs/rec/svtrv2/readme.md#11-models-and-results)\] \[[Datasets](./docs/svtrv2.md#downloading-datasets)\] \[[Config, Training and Inference](./configs/rec/svtrv2/readme.md#3-model-training--evaluation)\] \[[Benchmark](./docs/svtrv2.md#results--configs--checkpoints)\]
-     - [Introduction](./docs/svtrv2.md)
-     - [Get Started](./docs/svtrv2.md#get-started-with-training-a-sota-scene-text-recognition-model-from-scratch) with training a SOTA Scene Text Recognition model from scratch.

## Quick Start

- ### 1. ONNX Inference

- #### Install OpenOCR and Dependencies:

- ```shell
- pip install openocr-python
- pip install onnxruntime
- ```

- #### Usage:

```python
from openocr import OpenOCR
- onnx_engine = OpenOCR(backend='onnx', device='cpu')
- img_path = '/path/img_path or /path/img_file'
- result, elapse = onnx_engine(img_path)
- ```

- ### 2. PyTorch Inference

- #### Dependencies:

- - [PyTorch](http://pytorch.org/) version >= 1.13.0
- - Python version >= 3.7

- ```shell
- conda create -n openocr python==3.8
- conda activate openocr
- # install the GPU version of torch
- conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 -c pytorch -c nvidia
- # or the CPU version
- conda install pytorch torchvision torchaudio cpuonly -c pytorch
- ```

- After installing the dependencies, choose either of the following two installation methods.

- #### 2.1. Python Modules

- **Install OpenOCR**:

- ```shell
- pip install openocr-python
- ```

- **Usage**:

- ```python
- from openocr import OpenOCR
- engine = OpenOCR()
- img_path = '/path/img_path or /path/img_file'
- result, elapse = engine(img_path)
- # Server mode
- # engine = OpenOCR(mode='server')
- ```

- #### 2.2. Clone this repository:

- ```shell
- git clone https://github.com/Topdu/OpenOCR.git
- cd OpenOCR
- pip install -r requirements.txt
- wget https://github.com/Topdu/OpenOCR/releases/download/develop0.0.1/openocr_det_repvit_ch.pth
- wget https://github.com/Topdu/OpenOCR/releases/download/develop0.0.1/openocr_repsvtr_ch.pth
- # Rec Server model
- # wget https://github.com/Topdu/OpenOCR/releases/download/develop0.0.1/openocr_svtrv2_ch.pth
- ```

- **Usage**:

- ```shell
- # OpenOCR system: Det + Rec model
- python tools/infer_e2e.py --img_path=/path/img_fold or /path/img_file
- # Det model
- python tools/infer_det.py --c ./configs/det/dbnet/repvit_db.yml --o Global.infer_img=/path/img_fold or /path/img_file
- # Rec model
- python tools/infer_rec.py --c ./configs/rec/svtrv2/repsvtr_ch.yml --o Global.infer_img=/path/img_fold or /path/img_file
- ```

- ##### Export ONNX model

- ```shell
- pip install onnx
- python tools/toonnx.py --c configs/rec/svtrv2/repsvtr_ch.yml --o Global.device=cpu
- python tools/toonnx.py --c configs/det/dbnet/repvit_db.yml --o Global.device=cpu
- ```

- ```shell
- pip install onnxruntime
- # OpenOCR system: Det + Rec model
- python tools/infer_e2e.py --img_path=/path/img_fold or /path/img_file --backend=onnx --device=cpu
- # Det model
- python tools/infer_det.py --c ./configs/det/dbnet/repvit_db.yml --o Global.backend=onnx Global.device=cpu Global.infer_img=/path/img_fold or /path/img_file
- # Rec model
- python tools/infer_rec.py --c ./configs/rec/svtrv2/repsvtr_ch.yml --o Global.backend=onnx Global.device=cpu Global.infer_img=/path/img_fold or /path/img_file
- ```

- tar xf OCR_e2e_img.tar
- # start demo
- python demo_gradio.py
```

- ## Reproduction Schedule

- ### Scene Text Recognition

- | Method | Venue | Training | Evaluation | Contributor |
- | ------ | ----- | -------- | ---------- | ----------- |
- | [CRNN](./configs/rec/svtrs/) | [TPAMI 2016](https://arxiv.org/abs/1507.05717) | ✅ | ✅ | |
- | [ASTER](./configs/rec/aster/) | [TPAMI 2019](https://ieeexplore.ieee.org/document/8395027) | ✅ | ✅ | [pretto0](https://github.com/pretto0) |
- | [NRTR](./configs/rec/nrtr/) | [ICDAR 2019](https://arxiv.org/abs/1806.00926) | ✅ | ✅ | |
- | [SAR](./configs/rec/sar/) | [AAAI 2019](https://aaai.org/papers/08610-show-attend-and-read-a-simple-and-strong-baseline-for-irregular-text-recognition/) | ✅ | ✅ | [pretto0](https://github.com/pretto0) |
- | [MORAN](./configs/rec/moran/) | [PR 2019](https://www.sciencedirect.com/science/article/abs/pii/S0031320319300263) | ✅ | ✅ | |
- | [DAN](./configs/rec/dan/) | [AAAI 2020](https://arxiv.org/pdf/1912.10205) | ✅ | ✅ | |
- | [RobustScanner](./configs/rec/robustscanner/) | [ECCV 2020](https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/3160_ECCV_2020_paper.php) | ✅ | ✅ | [pretto0](https://github.com/pretto0) |
- | [AutoSTR](./configs/rec/autostr/) | [ECCV 2020](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123690732.pdf) | ✅ | ✅ | |
- | [SRN](./configs/rec/srn/) | [CVPR 2020](https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Towards_Accurate_Scene_Text_Recognition_With_Semantic_Reasoning_Networks_CVPR_2020_paper.html) | ✅ | ✅ | [pretto0](https://github.com/pretto0) |
- | [SEED](./configs/rec/seed/) | [CVPR 2020](https://openaccess.thecvf.com/content_CVPR_2020/html/Qiao_SEED_Semantics_Enhanced_Encoder-Decoder_Framework_for_Scene_Text_Recognition_CVPR_2020_paper.html) | ✅ | ✅ | |
- | [ABINet](./configs/rec/abinet/) | [CVPR 2021](https://openaccess.thecvf.com//content/CVPR2021/html/Fang_Read_Like_Humans_Autonomous_Bidirectional_and_Iterative_Language_Modeling_for_CVPR_2021_paper.html) | ✅ | ✅ | [YesianRohn](https://github.com/YesianRohn) |
- | [VisionLAN](./configs/rec/visionlan/) | [ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_From_Two_to_One_A_New_Scene_Text_Recognizer_With_ICCV_2021_paper.html) | ✅ | ✅ | [YesianRohn](https://github.com/YesianRohn) |
- | PIMNet | [ACM MM 2021](https://dl.acm.org/doi/10.1145/3474085.3475238) | | | TODO |
- | [SVTR](./configs/rec/svtrs/) | [IJCAI 2022](https://www.ijcai.org/proceedings/2022/124) | ✅ | ✅ | |
- | [PARSeq](./configs/rec/parseq/) | [ECCV 2022](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136880177.pdf) | ✅ | ✅ | |
- | [MATRN](./configs/rec/matrn/) | [ECCV 2022](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136880442.pdf) | ✅ | ✅ | |
- | [MGP-STR](./configs/rec/mgpstr/) | [ECCV 2022](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136880336.pdf) | ✅ | ✅ | |
- | [LPV](./configs/rec/lpv/) | [IJCAI 2023](https://www.ijcai.org/proceedings/2023/0189.pdf) | ✅ | ✅ | |
- | [MAERec](./configs/rec/maerec/) (Union14M) | [ICCV 2023](https://openaccess.thecvf.com/content/ICCV2023/papers/Jiang_Revisiting_Scene_Text_Recognition_A_Data_Perspective_ICCV_2023_paper.pdf) | ✅ | ✅ | |
- | [LISTER](./configs/rec/lister/) | [ICCV 2023](https://openaccess.thecvf.com/content/ICCV2023/papers/Cheng_LISTER_Neighbor_Decoding_for_Length-Insensitive_Scene_Text_Recognition_ICCV_2023_paper.pdf) | ✅ | ✅ | |
- | [CDistNet](./configs/rec/cdistnet/) | [IJCV 2024](https://link.springer.com/article/10.1007/s11263-023-01880-0) | ✅ | ✅ | [YesianRohn](https://github.com/YesianRohn) |
- | [BUSNet](./configs/rec/busnet/) | [AAAI 2024](https://ojs.aaai.org/index.php/AAAI/article/view/28402) | ✅ | ✅ | |
- | DCTC | [AAAI 2024](https://ojs.aaai.org/index.php/AAAI/article/view/28575) | | | TODO |
- | [CAM](./configs/rec/cam/) | [PR 2024](https://arxiv.org/abs/2402.13643) | ✅ | ✅ | |
- | [OTE](./configs/rec/ote/) | [CVPR 2024](https://openaccess.thecvf.com/content/CVPR2024/html/Xu_OTE_Exploring_Accurate_Scene_Text_Recognition_Using_One_Token_CVPR_2024_paper.html) | ✅ | ✅ | |
- | CFF | [IJCAI 2024](https://arxiv.org/abs/2407.05562) | | | TODO |
- | [DPTR](./configs/rec/dptr/) | [ACM MM 2024](https://dl.acm.org/doi/10.1145/3664647.3681390) | | | [fd-zs](https://github.com/fd-zs) |
- | VIPTR | [ACM CIKM 2024](https://arxiv.org/abs/2401.10110) | | | TODO |
- | [IGTR](./configs/rec/igtr/) | [TPAMI 2025](https://ieeexplore.ieee.org/document/10820836) | ✅ | ✅ | |
- | [SMTR](./configs/rec/smtr/) | [AAAI 2025](https://ojs.aaai.org/index.php/AAAI/article/view/32285) | ✅ | ✅ | |
- | [CPPD](./configs/rec/cppd/) | [TPAMI 2025](https://ieeexplore.ieee.org/document/10902187) | ✅ | ✅ | |
- | [FocalSVTR-CTC](./configs/rec/svtrs/) | [AAAI 2025](https://ojs.aaai.org/index.php/AAAI/article/view/32285) | ✅ | ✅ | |
- | [SVTRv2](./configs/rec/svtrv2/) | [ICCV 2025](https://arxiv.org/abs/2411.15858) | ✅ | ✅ | |
- | [ResNet+Trans-CTC](./configs/rec/svtrs/) | | ✅ | ✅ | |
- | [ViT-CTC](./configs/rec/svtrs/) | | ✅ | ✅ | |
- | [MDiff4STR](./configs/rec/mdiff4str/) | [AAAI 2026 Oral](https://arxiv.org/abs/2512.01422) | ✅ | ✅ | |

- #### Contributors

- ______________________________________________________________________

- Yiming Lei ([pretto0](https://github.com/pretto0)), Xingsong Ye ([YesianRohn](https://github.com/YesianRohn)), and Shuai Zhao ([fd-zs](https://github.com/fd-zs)) from the [FVL Laboratory](https://fvl.fudan.edu.cn), Fudan University, with guidance from Dr. Zhineng Chen ([Homepage](https://zhinchenfd.github.io/)), completed the majority of the algorithm reproduction work. We are grateful for their outstanding contributions.

- ### Scene Text Detection (STD)

- TODO

- ### Text Spotting

- TODO

- ______________________________________________________________________

## Citation

- If you find our method useful for your research, please cite:

```bibtex
@inproceedings{Du2025SVTRv2,
title={SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition},
author={Yongkun Du and Zhineng Chen and Hongtao Xie and Caiyan Jia and Yu-Gang Jiang},

@@ -292,15 +68,8 @@ If you find our method useful for your research, please cite:

year={2025},
pages={20147-20156}
}
- @article{du2025unirec,
- title={UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters},
- author={Yongkun Du and Zhineng Chen and Yazhen Xie and Weikang Bai and Hao Feng and Wei Shi and Yuchen Su and Can Huang and Yu-Gang Jiang},
- journal={arXiv preprint arXiv:2512.21095},
- year={2025}
- }
```

- This
+ ---
+ language:
+ - en
+ - zh
+ license: apache-2.0
+ task_categories:
+ - image-to-text
+ tags:
+ - ocr
+ - formula-recognition
+ - text-recognition
+ - document-parsing
+ ---

+ <div align="center">

+ <h1>UniRec40M: Unified Text and Formula Recognition Dataset</h1>

+ [**Paper**](https://huggingface.co/papers/2512.21095) | [**Code**](https://github.com/Topdu/OpenOCR) | [**Demo**](https://huggingface.co/spaces/topdu/OpenOCR-UniRec-Demo)

</div>

+ **UniRec40M** is a large-scale dataset comprising 40 million samples of text, formulas, and mixed content. It was introduced in the paper "[UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters](https://huggingface.co/papers/2512.21095)" to enable the training of lightweight yet powerful models for document parsing.

+ The dataset covers multiple levels of recognition, including characters, words, lines, paragraphs, and full documents. It specifically addresses challenges like structural variability and semantic entanglement between text and mathematical formulas.

## Features

+ - **Large Scale**: 40 million high-quality samples.
+ - **Unified Recognition**: Supports plain text (words, lines, paragraphs), formulas (single-line, multi-line), and mixed content.
+ - **Bilingual Support**: Comprehensive coverage of Chinese and English documents.
+ - **Multi-domain**: Samples drawn from diverse document types and domains.
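
If you want to explore the raw data directly, a loading sketch along these lines may help. Note that the repo id `topdu/UniRec40M`, the `train` split, and the field names below are assumptions for illustration, not confirmed by this card; check the dataset's file layout for the actual schema.

```python
# Minimal sketch: loading the dataset with the Hugging Face `datasets` library.
# Assumptions (hypothetical): repo id "topdu/UniRec40M", a "train" split,
# and the exact field names -- verify against the dataset card before use.
from datasets import load_dataset

# Streaming avoids downloading all 40M samples up front.
ds = load_dataset("topdu/UniRec40M", split="train", streaming=True)

sample = next(iter(ds))
print(sample.keys())  # inspect the actual fields before relying on a schema
```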

## Quick Start

+ You can use the associated `openocr-python` package for inference with models trained on this data:

```python
from openocr import OpenOCR

+ # Initialize the engine (using ONNX as an example)
+ onnx_engine = OpenOCR(backend='onnx', device='cpu')

+ # Path to your image
+ img_path = '/path/to/your/image.png'

+ # Perform recognition
+ result, elapse = onnx_engine(img_path)
+ print(result)
```
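
For reference, the pre-change README also documented a default PyTorch backend and a commented-out server mode. Assuming the current `openocr-python` release still exposes those options, usage would look like the sketch below:

```python
from openocr import OpenOCR

# Default PyTorch backend with the mobile-class models (as in the previous README).
engine = OpenOCR()

# Heavier server-class recognition model; assumes mode='server' is still supported.
# engine = OpenOCR(mode='server')

result, elapse = engine('/path/to/your/image.png')
print(result)
```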
## Citation

+ If you find this dataset or the UniRec-0.1B model useful for your research, please cite:

```bibtex
+ @article{du2025unirec,
+   title={UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters},
+   author={Yongkun Du and Zhineng Chen and Yazhen Xie and Weikang Bai and Hao Feng and Wei Shi and Yuchen Su and Can Huang and Yu-Gang Jiang},
+   journal={arXiv preprint arXiv:2512.21095},
+   year={2025}
+ }
+
@inproceedings{Du2025SVTRv2,
title={SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition},
author={Yongkun Du and Zhineng Chen and Hongtao Xie and Caiyan Jia and Yu-Gang Jiang},
year={2025},
pages={20147-20156}
}
```

+ ## Acknowledgement

+ This project is maintained by the OCR team from the [FVL Laboratory](https://fvl.fudan.edu.cn), Fudan University. The codebase is built upon [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR), [PytorchOCR](https://github.com/WenmuZhou/PytorchOCR), and [MMOCR](https://github.com/open-mmlab/mmocr).