---
license: bsd-3-clause-clear
language:
- en
tags:
- Transformer
- ONNX
- ocr
- mmocr
- satrn
---

# satrn

[Original repo](https://github.com/open-mmlab/mmocr/blob/main/configs/textrecog/satrn/README.md)

## Conversion tool links

If you are interested in model conversion, you can export the ONNX or axmodel yourself with [satrn.axera](https://github.com/AXERA-TECH/satrn.axera).

## Installation

```
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
git clone https://github.com/open-mmlab/mmocr.git
cd mmocr
mim install -e .
```

## Supported platforms

- AX650
  - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
  - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)

Measured latency of the two parts of SATRN, (1) backbone + encoder and (2) decoder, under different NPU configurations:

| NPU config | backbone+encoder (ms) | decoder (ms) |
|--|--|--|
| NPU1 | 20.494 | 2.648 |
| NPU2 | 9.785 | 1.504 |
| NPU3 | 6.085 | 1.384 |

## How to use

Download all files from this repository to the device:

```
.
├── axmodel
│   ├── backbone_encoder.axmodel
│   └── decoder.axmodel
├── demo_text_recog.jpg
├── onnx
│   ├── satrn_backbone_encoder.onnx
│   └── satrn_decoder_sim.onnx
├── README.md
├── run_axmodel.py
├── run_model.py
└── run_onnx.py
```

### Python environment requirements

#### 1. pyaxengine

https://github.com/AXERA-TECH/pyaxengine

```
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl
```

#### 2. satrn

[satrn installation](https://github.com/open-mmlab/mmocr/tree/main?tab=readme-ov-file#installation)

#### Inference with the ONNX model

```
python run_onnx.py
```

Input:

![](demo_text_recog.jpg)

Output:

```
pred_text: STAR
score: [0.9384028315544128, 0.9574984908103943, 0.9993689656257629, 0.9994958639144897]
```

#### Inference on an AX650 host

Check the [reference](https://github.com/AXERA-TECH/satrn.axera) for more information.
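As a side note on the `run_onnx.py` output above: the predicted string plus one score per character is the typical result of greedily decoding the decoder's logits. The sketch below illustrates that post-processing step only; the character set, the `<EOS>` index, and the function name are illustrative assumptions, not mmocr's actual SATRN dictionary or API.

```python
import numpy as np

# Illustrative character set; mmocr's real SATRN dictionary is larger
# (digits, letters, punctuation, plus special tokens such as <BOS>/<EOS>/<PAD>).
CHARSET = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
EOS = len(CHARSET)  # index of the end-of-sequence token in this sketch

def greedy_decode(logits: np.ndarray):
    """Turn (seq_len, num_classes) decoder logits into text + per-char scores."""
    # Softmax over the class dimension to get probabilities.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    ids = probs.argmax(axis=-1)
    text, scores = "", []
    for t, idx in enumerate(ids):
        if idx == EOS:  # stop at the end-of-sequence token
            break
        text += CHARSET[idx]
        scores.append(float(probs[t, idx]))
    return text, scores

# Demo with synthetic logits spelling "STAR" followed by <EOS>.
logits = np.full((5, len(CHARSET) + 1), -10.0, dtype=np.float32)
for t, idx in enumerate([18, 19, 0, 17, EOS]):  # S, T, A, R, <EOS>
    logits[t, idx] = 10.0
text, scores = greedy_decode(logits)
print("pred_text:", text)  # pred_text: STAR
print("score:", scores)
```

Each score is the softmax probability of the chosen character at its timestep, which is why the values printed by `run_onnx.py` are close to (but below) 1.0.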
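The latency table above reports the two SATRN stages separately. A minimal sketch of the resulting per-image total, assuming the table gives one full pass of each part (the decoder is autoregressive, so multiply its figure by the number of decoding calls if it is per call):

```python
# Per-part latencies (ms) copied from the table above.
latency_ms = {
    "NPU1": {"backbone_encoder": 20.494, "decoder": 2.648},
    "NPU2": {"backbone_encoder": 9.785, "decoder": 1.504},
    "NPU3": {"backbone_encoder": 6.085, "decoder": 1.384},
}

def total_latency_ms(config: str) -> float:
    """Sum of the two stages for one image, assuming one pass of each part."""
    parts = latency_ms[config]
    return parts["backbone_encoder"] + parts["decoder"]

for cfg in latency_ms:
    print(f"{cfg}: {total_latency_ms(cfg):.3f} ms")
# NPU1: 23.142 ms, NPU2: 11.289 ms, NPU3: 7.469 ms
```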