---
license: mit
language:
- en
- tr
---
|
|
|
|
|
# PaddleOCR Mobile Quantized Models (ONNX) |
|
|
|
|
|
|
|
|
## Overview |
|
|
This repo hosts four **ONNX** models converted from PaddleOCR mobile checkpoints:
|
|
|
|
|
| File | Task | Language scope | Input shape (NCHW) |
|------|------|----------------|--------------------|
| `Multilingual_PP-OCRv3_det_infer.onnx` | Text detection | 80+ scripts | 1×3×H×W |
| `PP-OCRv3_mobile_det_infer.onnx` | Text detection | Latin only | 1×3×H×W |
| `ch_ppocr_mobile_v2.0_cls_infer.onnx` | Angle classification | Chinese/Latin | 1×3×H×W |
| `latin_PP-OCRv3_mobile_rec_infer.onnx` | Text recognition | Latin | 1×3×H×W |
|
|
|
|
|
All models were:

* exported with **paddle2onnx 1.2.3** (`opset 11`)
* simplified via **onnx-simplifier 0.4+**
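For reference, that conversion pipeline can be reproduced roughly as below. The directory and file names are placeholders for one of the Paddle inference checkpoints; the flags follow paddle2onnx's standard CLI.

```shell
# Export a Paddle inference model to ONNX (paths are placeholders)
paddle2onnx \
  --model_dir ./ch_ppocr_mobile_v2.0_cls_infer \
  --model_filename inference.pdmodel \
  --params_filename inference.pdiparams \
  --save_file ch_ppocr_mobile_v2.0_cls_infer.onnx \
  --opset_version 11

# Simplify the exported graph with onnx-simplifier's CLI
onnxsim ch_ppocr_mobile_v2.0_cls_infer.onnx ch_ppocr_mobile_v2.0_cls_infer.onnx
```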
|
|
|
|
|
## Quick Start |
|
|
|
|
|
```python
import numpy as np
import onnxruntime as ort

# Dummy input: real images should be resized so H and W are multiples of 32
# and normalized as in the PaddleOCR reference pipeline
img = np.random.rand(1, 3, 224, 224).astype("float32")

det = ort.InferenceSession("Multilingual_PP-OCRv3_det_infer.onnx")
cls = ort.InferenceSession("ch_ppocr_mobile_v2.0_cls_infer.onnx")
rec = ort.InferenceSession("latin_PP-OCRv3_mobile_rec_infer.onnx")

det_out = det.run(None, {det.get_inputs()[0].name: img})[0]
# add your post-processing / cropping / decoding here …
```
|
|
|
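The detector does not accept arbitrary sizes: PaddleOCR's reference pipeline resizes so that H and W are multiples of 32 and normalizes with ImageNet statistics. A minimal numpy-only sketch of that preprocessing (the function name `preprocess_det` and the nearest-neighbour resize are our own stand-ins; the real pipeline uses OpenCV interpolation):

```python
import numpy as np

def preprocess_det(img: np.ndarray, limit_side_len: int = 960) -> np.ndarray:
    """Resize an HWC uint8 image so both sides are multiples of 32,
    normalize with ImageNet mean/std, and return an NCHW float32 batch."""
    h, w = img.shape[:2]
    # Scale the longer side down to limit_side_len if needed
    ratio = min(1.0, limit_side_len / max(h, w))
    new_h = max(32, int(round(h * ratio / 32)) * 32)
    new_w = max(32, int(round(w * ratio / 32)) * 32)
    # Nearest-neighbour resize via index sampling (cv2.resize in practice)
    ys = (np.arange(new_h) * h / new_h).astype(int)
    xs = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[ys][:, xs].astype("float32") / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype="float32")
    std = np.array([0.229, 0.224, 0.225], dtype="float32")
    normalized = (resized - mean) / std
    return normalized.transpose(2, 0, 1)[None]  # HWC -> NCHW batch

x = preprocess_det(np.zeros((720, 1280, 3), dtype=np.uint8))
# x.shape is (1, 3, 544, 960): both sides snapped to multiples of 32
```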
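Downstream of the detector, each cropped text line goes through the classifier and the recognizer, whose raw outputs still need decoding. The recognizer is a CTC head, so its per-frame scores are decoded by collapsing repeats and dropping blanks. A self-contained sketch of CTC greedy decoding; the `CHARSET` here is a toy stand-in for the model's real Latin dictionary (blank at index 0):

```python
import numpy as np

# Toy dictionary; the real model ships a Latin character list, CTC blank at index 0
CHARSET = ["<blank>", "h", "e", "l", "o"]

def ctc_greedy_decode(logits: np.ndarray) -> str:
    """Standard CTC greedy decoding for `logits` of shape (T, num_classes):
    take the argmax per frame, collapse consecutive repeats, drop blanks."""
    ids = logits.argmax(axis=1)
    out = []
    prev = -1
    for i in ids:
        if i != prev and i != 0:  # index 0 is the CTC blank
            out.append(CHARSET[i])
        prev = i
    return "".join(out)

# Frame-level one-hot scores spelling "hello", with a blank between the two l's
t = np.eye(5, dtype="float32")
frames = np.stack([t[1], t[2], t[3], t[0], t[3], t[4]])
print(ctc_greedy_decode(frames))  # -> "hello"
```

For the angle classifier, the output is a score over two classes (0° and 180°); PaddleOCR's pipeline rotates a crop only when the 180° class wins with high confidence (the default threshold is 0.9), and the same check is easy to add here.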