---
license: mit
language:
- en
pipeline_tag: object-detection
tags:
- YOLOv7
- YOLOv7-Face
---
# YOLOv7-FACE
This version of YOLOv7-FACE has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links
If you are interested in model conversion, you can export an axmodel yourself using these resources:
- [The AXera Platform samples repo](https://github.com/AXERA-TECH/ax-samples), which contains a detailed guide
- [Pulsar2 documentation: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Support Platform
- AX650
  - [M4N-Dock (AXera-Pi Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
  - [AXera-Pi 2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)

|Chip|Inference time|
|--|--|
|AX650|12.6 ms|
|AX630C|TBD|
## How to use
Download all files from this repository to the device:
```
root@ax650:~/YOLOv7-Face# tree
.
|-- ax650
| `-- yolov7-face.axmodel
|-- ax_yolov7_face
|-- selfie.jpg
`-- yolov7_face_out.jpg
```
### Inference
Input image:

#### Inference on an AX650 host, such as the M4N-Dock (AXera-Pi Pro)
```
root@ax650:~/YOLOv7-Face# ./ax_yolov7_face -m ax650/yolov7-face.axmodel -i selfie.jpg
--------------------------------------
model file : ax650/yolov7-face.axmodel
image file : selfie.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
post process cost time:8.70 ms
--------------------------------------
Repeat 1 times, avg time 12.59 ms, max_time 12.59 ms, min_time 12.59 ms
--------------------------------------
detection num: 174
0: 91%, [1137, 869, 1283, 1065], face
0: 91%, [1424, 753, 1570, 949], face
......
0: 45%, [1658, 362, 1677, 387], face
0: 45%, [1445, 437, 1467, 462], face
--------------------------------------
root@ax650:~/YOLOv7-Face#
```
Output image:

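Both runs print one line per detection in the form `class: score%, [x1, y1, x2, y2], label`. If you want to post-process these results on the host, a small parser is enough; the helper below is a hypothetical sketch (not part of this repo) that turns one log line into a Python dict.

```python
import re

# Matches detection lines such as "0: 91%, [1137, 869, 1283, 1065], face".
DET_RE = re.compile(
    r"(\d+):\s*(\d+)%,\s*\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\],\s*(\w+)"
)

def parse_detection(line):
    """Parse one detection line; return None for non-detection log lines."""
    m = DET_RE.match(line.strip())
    if m is None:
        return None
    cls_id, score, x1, y1, x2, y2, label = m.groups()
    return {
        "class_id": int(cls_id),
        "score": int(score) / 100.0,                  # confidence, 0..1
        "box": (int(x1), int(y1), int(x2), int(y2)),  # pixel x1,y1,x2,y2
        "label": label,
    }

print(parse_detection("0: 91%, [1137, 869, 1283, 1065], face"))
```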
#### Inference with M.2 Accelerator card
```
(base) axera@raspberrypi:~/lhj/YOLOv7-Face $ ./axcl_aarch64/axcl_yolov7_face -m ax650/yolov7-face.axmodel -i selfie.jpg
--------------------------------------
model file : ax650/yolov7-face.axmodel
image file : selfie.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 3
name: 511
1 x 80 x 80 x 63
name: 520
1 x 40 x 40 x 63
name: 529
1 x 20 x 20 x 63
==================================================
Engine push input is done.
--------------------------------------
post process cost time:8.29 ms
--------------------------------------
Repeat 1 times, avg time 12.23 ms, max_time 12.23 ms, min_time 12.23 ms
--------------------------------------
detection num: 277
0: 91%, [1137, 869, 1283, 1065], face
0: 91%, [1424, 753, 1570, 949], face
0: 89%, [1305, 764, 1403, 900], face
0: 87%, [1738, 786, 1796, 860], face
......
0: 20%, [1120, 570, 1145, 604], face
0: 20%, [1025, 390, 1041, 413], face
--------------------------------------
```
Output image:
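The three output tensors reported by `axclrtEngineGetIOInfo` follow the usual three-scale YOLO head: 640/8 = 80, 640/16 = 40, and 640/32 = 20 grid cells per side. The 63 channels are consistent with 3 anchors per cell times 21 values each (4 box coordinates, 1 objectness score, 1 face class score, and 5 facial landmarks at 3 values apiece). Note this channel layout is an interpretation of the printed shapes, not something the log itself states.

```python
# Reproduce the printed output shapes from the assumed head layout:
# 3 anchors x (4 box + 1 objectness + 1 class + 5 landmarks x 3 values).
INPUT_SIZE = 640
NUM_ANCHORS = 3
VALUES_PER_ANCHOR = 4 + 1 + 1 + 5 * 3  # = 21

for stride in (8, 16, 32):             # one detection head per stride
    grid = INPUT_SIZE // stride
    print(f"stride {stride:2d}: 1 x {grid} x {grid} x {NUM_ANCHORS * VALUES_PER_ANCHOR}")
```

This prints `1 x 80 x 80 x 63`, `1 x 40 x 40 x 63`, and `1 x 20 x 20 x 63`, matching the shapes in the log above.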