---
license: mit
language:
- en
pipeline_tag: object-detection
tags:
- YOLOv7
- YOLOv7-Face
---
# YOLOv7-Face
This version of YOLOv7-Face has been converted to run on the Axera NPU using **w8a16** quantization (8-bit weights, 16-bit activations).
Compatible with Pulsar2 version: 3.4
## Convert tools links:
For those interested in model conversion, you can try exporting an axmodel yourself:
- [The AXera Platform samples repo](https://github.com/AXERA-TECH/ax-samples), which provides a detailed guide
- [Pulsar2 Link, How to Convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
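As a rough sketch of what the build step involves, a Pulsar2 w8a16 configuration might look like the following. The field names and values here are assumptions from memory, and `yolov7-face.onnx` / `calib_images.tar` are placeholder paths; the Pulsar2 documentation linked above is the authoritative reference for the actual schema.

```json
{
  "model_type": "ONNX",
  "npu_mode": "NPU1",
  "input": "yolov7-face.onnx",
  "output_dir": "output",
  "target_hardware": "AX650",
  "quant": {
    "input_configs": [
      {
        "tensor_name": "images",
        "calibration_dataset": "calib_images.tar",
        "calibration_size": 32
      }
    ],
    "calibration_method": "MinMax",
    "layer_configs": [
      {
        "start_tensor_names": ["DEFAULT"],
        "end_tensor_names": ["DEFAULT"],
        "data_type": "U16"
      }
    ]
  }
}
```

The `layer_configs` entry is what would raise activations to 16-bit while weights stay 8-bit; a representative calibration set of face images matters for quantization quality.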
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)

|Chip|Inference latency|
|--|--|
|AX650|12.6 ms|
|AX630C|TBD|
## How to use
Download all the files from this repository to the device:
```
root@ax650:~/YOLOv7-Face# tree
.
|-- ax650
| `-- yolov7-face.axmodel
|-- ax_yolov7_face
|-- selfie.jpg
`-- yolov7_face_out.jpg
```
### Inference
Input image:
![](./selfie.jpg)
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:~/YOLOv7-Face# ./ax_yolov7_face -m ax650/yolov7-face.axmodel -i selfie.jpg
--------------------------------------
model file : ax650/yolov7-face.axmodel
image file : selfie.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
post process cost time:8.70 ms
--------------------------------------
Repeat 1 times, avg time 12.59 ms, max_time 12.59 ms, min_time 12.59 ms
--------------------------------------
detection num: 174
0: 91%, [1137, 869, 1283, 1065], face
0: 91%, [1424, 753, 1570, 949], face
......
0: 45%, [1658, 362, 1677, 387], face
0: 45%, [1445, 437, 1467, 462], face
--------------------------------------
root@ax650:~/YOLOv7-Face#
```
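The log above shows the model expects a 640×640 input, so the runner must scale the source image down first. If you feed the `.axmodel` from your own code, a letterbox-style resize (an assumption — the sample binary may instead use a plain resize) can be sketched with nothing but NumPy:

```python
import numpy as np

def letterbox(img: np.ndarray, dst=(640, 640), pad_value=114):
    """Resize keeping aspect ratio (nearest-neighbor), pad the rest with gray."""
    h, w = img.shape[:2]
    dh, dw = dst
    scale = min(dh / h, dw / w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize via integer index maps
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.full((dh, dw, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (dh - nh) // 2, (dw - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out, scale, (left, top)

img = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for selfie.jpg
padded, scale, (left, top) = letterbox(img)
print(padded.shape)   # (640, 640, 3)
```

Keeping `scale`, `left`, and `top` lets you map detected boxes back to the original image coordinates, which is how the pixel coordinates in the detection list above stay in the source image's frame.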
Output image:
![](./yolov7_face_out.jpg)
#### Inference with M.2 Accelerator card
```
(base) axera@raspberrypi:~/lhj/YOLOv7-Face $ ./axcl_aarch64/axcl_yolov7_face -m ax650/yolov7-face.axmodel -i selfie.jpg
--------------------------------------
model file : ax650/yolov7-face.axmodel
image file : selfie.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 3
name: 511
1 x 80 x 80 x 63
name: 520
1 x 40 x 40 x 63
name: 529
1 x 20 x 20 x 63
==================================================
Engine push input is done.
--------------------------------------
post process cost time:8.29 ms
--------------------------------------
Repeat 1 times, avg time 12.23 ms, max_time 12.23 ms, min_time 12.23 ms
--------------------------------------
detection num: 277
0: 91%, [1137, 869, 1283, 1065], face
0: 91%, [1424, 753, 1570, 949], face
0: 89%, [1305, 764, 1403, 900], face
0: 87%, [1738, 786, 1796, 860], face
......
0: 20%, [1120, 570, 1145, 604], face
0: 20%, [1025, 390, 1041, 413], face
--------------------------------------
```
Output image:
![](./yolov7_face_axcl_out.jpg)
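The three output tensors in the log above (80×80, 40×40, and 20×20 grids, 63 channels each) follow the usual YOLOv7-Face head layout: 3 anchors per cell × 21 values, where 21 = 4 box coordinates + 1 objectness + 1 face-class score + 5 landmarks × 3 (x, y, score). The per-keypoint score is an assumption based on the upstream YOLOv7-Face repo; a minimal sketch of turning one raw head output into per-anchor face confidences:

```python
import numpy as np

NUM_ANCHORS = 3                      # anchors per detection scale
NUM_KPTS = 5                         # facial landmarks, assumed (x, y, score) each
VALS = 4 + 1 + 1 + NUM_KPTS * 3      # box(4) + objectness + class + landmarks
assert NUM_ANCHORS * VALS == 63      # matches the 63-channel outputs in the log

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def face_scores(feat: np.ndarray) -> np.ndarray:
    """feat: (H, W, 63) raw head output -> (H, W, 3) face confidence per anchor."""
    h, w, _ = feat.shape
    feat = feat.reshape(h, w, NUM_ANCHORS, VALS)
    obj = sigmoid(feat[..., 4])      # objectness logit
    cls = sigmoid(feat[..., 5])      # single "face" class logit
    return obj * cls

conf = face_scores(np.zeros((80, 80, 63), dtype=np.float32))
print(conf.shape)   # (80, 80, 3)
```

A full decoder would additionally apply the anchor/grid offsets to the first four values, threshold these confidences, and run NMS — which is the "post process" step the logs time at roughly 8 ms.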