---
license: apache-2.0
pipeline_tag: image-to-text
---

# UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters

[[Paper](https://huggingface.co/papers/2512.21095)] [[Code](https://github.com/Topdu/OpenOCR)] [[ModelScope Demo](https://www.modelscope.cn/studios/topdktu/OpenOCR-UniRec-Demo)] [[Hugging Face Demo](https://huggingface.co/spaces/topdu/OpenOCR-UniRec-Demo)] [[Local Demo](#local-demo)]

## Introduction

**UniRec-0.1B** is a unified recognition model with only 0.1B parameters, designed for accurate and efficient recognition of plain text (words, lines, paragraphs), mathematical formulas (single-line and multi-line), and mixed content in both Chinese and English.

It addresses structural variability and semantic entanglement with a hierarchical supervision training strategy and a semantic-decoupled tokenizer. Despite its small size, it achieves performance comparable to, or better than, much larger vision-language models.

## Getting Started with UniRec

### Dependencies

- [PyTorch](http://pytorch.org/) version >= 1.13.0
- Python version >= 3.7

```shell
conda create -n openocr python==3.10
conda activate openocr
# install the GPU version of torch (>= 1.13.0)
conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 -c pytorch -c nvidia
# or the CPU version
conda install pytorch torchvision torchaudio cpuonly -c pytorch
git clone https://github.com/Topdu/OpenOCR.git
```

### Downloading the UniRec Model from ModelScope or Hugging Face

```shell
cd OpenOCR
pip install -r requirements.txt
# download the model from ModelScope
modelscope download topdktu/unirec-0.1b --local_dir ./unirec-0.1b
# or download the model from Hugging Face
huggingface-cli download topdu/unirec-0.1b --local-dir ./unirec-0.1b
```

### Inference

```shell
# Global.infer_img accepts either a single image file or a folder of images
python tools/infer_rec.py --c ./configs/rec/unirec/focalsvtr_ardecoder_unirec.yml --o Global.infer_img=/path/to/img_folder_or_file
```

### Local Demo

```shell
pip install gradio==4.20.0
python demo_unirec.py
```

### Training

Additional dependencies:

```shell
pip install PyMuPDF
pip install pdf2image
pip install numpy==1.26.4
pip install albumentations==1.4.24
pip install transformers==4.49.0
pip install -U flash-attn --no-build-isolation
```

It is recommended to organize your working directory as follows:

```shell
|-UniRec40M   # main directory for the UniRec40M dataset
|-OpenOCR     # the OpenOCR repository
|-evaluation  # directory for the evaluation dataset
```

Download the UniRec40M dataset from Hugging Face:

```shell
# download a small subset for quick training
huggingface-cli download topdu/UniRec40M --include "hiertext_lmdb/**" --repo-type dataset --local-dir ./UniRec40M/
huggingface-cli download topdu/OpenOCR-Data --include "evaluation/**" --repo-type dataset --local-dir ./
```

Run the following command to train the model quickly:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --master_port=23333 --nproc_per_node=8 tools/train_rec.py --c configs/rec/unirec/focalsvtr_ardecoder_unirec.yml
```
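
For orientation, the launch above starts one process per GPU; assuming the loader's `batch_size_per_card: 64` from the training config takes effect on every card, the effective global batch size is just the product:

```shell
# Effective global batch size of the 8-GPU run above:
# one process per GPU, each loading batch_size_per_card samples per step.
nproc_per_node=8
batch_size_per_card=64
global_bs=$(( nproc_per_node * batch_size_per_card ))
echo "global batch size: $global_bs"
```

Scale `nproc_per_node` (and `CUDA_VISIBLE_DEVICES`) down accordingly when training on fewer GPUs.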

To use the full dataset, you need to merge the split files named `data.mdb.part_*` (located in `HWDB2Train`, `ch_pdf_lmdb`, and `en_pdf_lmdb`) into a single `data.mdb` file. Run the commands below step by step:

```shell
# download the full dataset
huggingface-cli download topdu/UniRec40M --repo-type dataset --local-dir ./UniRec40M/
cd UniRec40M/HWDB2Train/image_lmdb && cat data.mdb.part_* > data.mdb
cd ../../ch_pdf_lmdb && cat data.mdb.part_* > data.mdb
cd ../en_pdf_lmdb && cat data.mdb.part_* > data.mdb
```
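
To verify a merge before training, note that the merged `data.mdb` should be byte-for-byte the concatenation of its parts, so its size must equal the sum of the part sizes. A minimal self-contained sketch of that check, using throwaway dummy files in a temporary directory (only the file names mirror the dataset's):

```shell
# Create dummy split files, merge them the same way as above,
# then confirm the merged size equals the sum of the part sizes.
tmp=$(mktemp -d)
printf 'AAAA' > "$tmp/data.mdb.part_00"
printf 'BB'   > "$tmp/data.mdb.part_01"
cat "$tmp"/data.mdb.part_* > "$tmp/data.mdb"      # the merge step
part_bytes=$(cat "$tmp"/data.mdb.part_* | wc -c)
merged_bytes=$(wc -c < "$tmp/data.mdb")
[ "$merged_bytes" -eq "$part_bytes" ] && echo "merge OK"
rm -rf "$tmp"
```

The same two `wc -c` lines, pointed at the real `image_lmdb` directories, give a quick integrity check after each merge.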

Then modify the `configs/rec/unirec/focalsvtr_ardecoder_unirec.yml` file as follows:

```yaml
...
Train:
  dataset:
    name: NaSizeDataSet
    divided_factor: &d_factor [64, 64] # w, h
    max_side: &max_side [960, 1408] # w, h
    root_path: path/to/UniRec40M
    add_return: True
    zoom_min_factor: 4
    use_zoom: True
    all_data: True
    test_data: False
    use_aug: True
    use_linedata: True
    transforms:
      - UniRecLabelEncode: # class handling the labels
          max_text_length: *max_text_length
          vlmocr: True
          tokenizer_path: *vlm_ocr_config # path to the tokenizer files, e.g. 'vocab.json', 'merges.txt'
      - KeepKeys:
          keep_keys: ['image', 'label', 'length'] # dataloader will return a list in this order
  sampler:
    name: NaSizeSampler
    # divide_factor: ensures the width and height can be divided by the downsampling multiple
    min_bs: 1
    max_bs: 24
  loader:
    shuffle: True
    batch_size_per_card: 64
    drop_last: True
    num_workers: 8
...
```
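
On our reading of `divided_factor` (an assumption based on the sampler comment above, not project code): each image side is padded up to the next multiple of 64, capped by `max_side`, so feature maps stay integral after downsampling. The rounding is just:

```shell
# Round each side length up to the next multiple of the divide factor (64).
factor=64
w=500; h=700   # example raw image size (w, h)
pad_w=$(( (w + factor - 1) / factor * factor ))
pad_h=$(( (h + factor - 1) / factor * factor ))
echo "padded size: ${pad_w}x${pad_h}"   # 500x700 -> 512x704
```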

## Citation

If you find our method useful for your research, please cite:

```bibtex
@article{du2025unirec,
  title={UniRec-0.1B: Unified Text and Formula Recognition with 0.1B Parameters},
  author={Yongkun Du and Zhineng Chen and Yazhen Xie and Weikang Bai and Hao Feng and Wei Shi and Yuchen Su and Can Huang and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2512.21095},
  year={2025}
}
```