---
pipeline_tag: image-to-image
---

# SuperResolution

This version of SuperResolution has been converted to run on the Axera NPU using **w8a8** quantization.

Compatible with Pulsar2 version: 4.2

## Convert tools links:

For those who are interested in model conversion, you can try to export the axmodel through:

- [The github repo of AXera Platform](https://github.com/AXERA-TECH/SuperResolution.axera)
- [How to Convert ONNX to axmodel](https://github.com/AXERA-TECH/SuperResolution.axera/tree/master/model_convert)
- [Pulsar2 tools](https://huggingface.co/AXERA-TECH/Pulsar2)

## Support Platform

- AX650
  - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
  - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)

|Chips|Model|Cost|
|--|--|--|
|AX650|EDSR|800 ms|
|AX650|ESPCN|22 ms|

## How to use

Download all files from this repository to the device:

```
root@ax650:~/SuperResolution# tree -L 1
.
├── assert
├── config.json
├── experiment
├── model_convert
├── python
├── README.md
└── video

6 directories, 2 files
```

### Requirements

```
pip install -r python/requirements.txt
```

pyaxengine is the Python API for the NPU. For installation details, see: https://github.com/AXERA-TECH/pyaxengine

### Inference

Input data:

```
└── video
    └── test_1920x1080.mp4
```

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)

```
root@ax650 ~/SuperResolution # python python/run_axmodel.py --model model_convert/axmodel/edsr_baseline_x2_1.axmodel --scale 2 --dir_demo video/test_1920x1080.mp4
[INFO] Available providers: ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 4.2 6bff2f67
100%|█████████████████████████████████████████| 267/267 [10:06<00:00,  2.27s/it]
Total time: 99.582 seconds for 267 frames
Average time: 0.373 seconds for each frame
```

The output file is written to `experiment/test_1920x1080_x2.avi`:

![Example Image](video/2.png)

Output data:

```
├── experiment
│   └── test_1920x1080_x2.avi
```

#### Inference with M.2 Accelerator card

```bash
$ cd python
$ python gradio_demo.py
[INFO] Available providers: ['AXCLRTExecutionProvider']
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 4.2 6bff2f67
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 4.2 6bff2f67
* Running on local URL:  http://0.0.0.0:7860
* To create a public link, set `share=True` in `launch()`.
```

Then replace 0.0.0.0 with the IP address of the host carrying the M.2 Accelerator card, and open the URL http://[your ip]:7860 in Chrome.

![gradio_demo](./assert/gradio_demo.jpg)
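As a sanity check on the benchmark numbers, the per-frame average reported by `run_axmodel.py` follows directly from the total time and frame count in the EDSR x2 log above. A minimal sketch (the figures are taken from that log; the variable names are illustrative):

```python
# Figures reported in the EDSR x2 run log for test_1920x1080.mp4.
total_seconds = 99.582
num_frames = 267

avg_per_frame = total_seconds / num_frames  # seconds per frame
fps = num_frames / total_seconds            # effective throughput

print(f"average: {avg_per_frame:.3f} s/frame")  # matches the logged 0.373 s
print(f"throughput: {fps:.2f} fps")
```

Note that the tqdm bar shows 2.27 s/it wall-clock per frame, which includes video decode/encode overhead on top of the NPU inference time.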
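To make the `--scale 2` option concrete: each 1920x1080 frame of the input video is upscaled to a 3840x2160 frame in the output `.avi`. The sketch below shows the shape bookkeeping only, with NumPy and a nearest-neighbour stub standing in for the NPU session; the `preprocess`/`postprocess` helpers and the NCHW layout are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

SCALE = 2  # corresponds to --scale 2 in the run_axmodel.py invocation


def preprocess(frame: np.ndarray) -> np.ndarray:
    """HWC uint8 frame -> NCHW float32 tensor (assumed input layout)."""
    x = frame.astype(np.float32)
    x = np.transpose(x, (2, 0, 1))  # HWC -> CHW
    return x[np.newaxis, ...]       # add batch dim -> NCHW


def postprocess(y: np.ndarray) -> np.ndarray:
    """NCHW float32 tensor -> HWC uint8 frame."""
    y = np.clip(np.round(y[0]), 0, 255).astype(np.uint8)
    return np.transpose(y, (1, 2, 0))  # CHW -> HWC


def fake_model(x: np.ndarray) -> np.ndarray:
    """Stand-in for the axmodel: nearest-neighbour x2 upscale, shapes only."""
    return x.repeat(SCALE, axis=2).repeat(SCALE, axis=3)


frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one 1920x1080 frame
out = postprocess(fake_model(preprocess(frame)))
print(out.shape)  # (2160, 3840, 3)
```

On the device, `fake_model` is replaced by the quantized EDSR or ESPCN axmodel running on the NPU; only the surrounding frame handling stays on the CPU.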