---
license: bsd-3-clause
language:
- en
base_model:
- deeplabv3plus_mobilenet
pipeline_tag: image-segmentation
tags:
- deeplabv3plus
---
# DeepLabv3Plus
This version of deeplabv3plus_mobilenet has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 5.0-patch1
## Conversion tool links
If you are interested in model conversion, you can export the axmodel yourself using:
- [Original model repository](https://github.com/VainF/DeepLabV3Plus-Pytorch.git)
- [Pulsar2 documentation: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Supported platforms
- AX650
  - [M4N-Dock (AXera-Pi Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
  - [M.2 accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX637

|Chip|Model|Inference time|
|--|--|--|
|AX650|deeplabv3plus_mobilenet_u16|13.4 ms|
|AX637|deeplabv3plus_mobilenet_u16|39.4 ms|
## How to use
Download all files from this repository to the device.
### python env requirement
#### pyaxengine
[pyaxengine](https://github.com/AXERA-TECH/pyaxengine)
```bash
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc2/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
```
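Once the wheel is installed, inference follows pyaxengine's onnxruntime-style API. The sketch below is illustrative, not a copy of `infer.py`: the 520×520 input size and ImageNet float normalization are assumptions (a w8a16 model may take raw uint8 input instead), so check `infer.py` for the real preprocessing.

```python
# Hedged sketch: pyaxengine exposes an onnxruntime-style API, so the
# session/run calls below follow that convention. The input size and
# normalization are illustrative assumptions, not taken from infer.py.
import numpy as np

def preprocess(img):
    """HWC uint8 image -> NCHW float32, ImageNet-normalized."""
    x = img.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    return ((x - mean) / std).transpose(2, 0, 1)[None]  # add batch dim

inp = preprocess(np.zeros((520, 520, 3), dtype=np.uint8))
print(inp.shape)  # (1, 3, 520, 520)

try:
    import axengine as axe  # installed from the wheel above
    sess = axe.InferenceSession("models-ax637/deeplabv3plus_mobilenet_u16.axmodel")
    logits = sess.run(None, {sess.get_inputs()[0].name: inp})[0]
    print("output shape:", logits.shape)
except ImportError:
    print("axengine not installed; run this on the device")
```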
#### Others
None.
#### Inference on an AX650 host, such as the M4N-Dock (AXera-Pi Pro)
Input image:

Run:
```bash
python3 infer.py --img samples/1_image.png --model models-ax637/deeplabv3plus_mobilenet_u16.axmodel
```
Output image:

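The colored mask produced by `infer.py` comes from standard segmentation post-processing: argmax the per-class logits, then map each class id to a color. Here is a minimal numpy sketch; the random palette is an assumption (the script may use a fixed palette and/or blend the mask with the input image).

```python
# Hedged sketch of DeepLabv3+ post-processing. The random palette is an
# illustrative assumption; infer.py may use a fixed palette instead.
import numpy as np

def colorize(logits, seed=0):
    """(1, C, H, W) logits -> (H, W, 3) uint8 color mask."""
    pred = logits.argmax(axis=1)[0]            # (H, W) per-pixel class ids
    rng = np.random.default_rng(seed)
    palette = rng.integers(0, 256, size=(logits.shape[1], 3), dtype=np.uint8)
    palette[0] = 0                             # background stays black
    return palette[pred]

demo = np.zeros((1, 21, 4, 4), dtype=np.float32)  # 21 VOC classes
demo[0, 15] = 1.0                                 # class 15 wins everywhere
mask = colorize(demo)
print(mask.shape)  # (4, 4, 3)
```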