Release AI-ModelZoo-4.0.0
README.md
# Performances

## Metrics

Measurements are made with the default STEdgeAI Core configuration, with the input / output allocated option enabled.

> [!CAUTION]
> All YOLOv8 hyperlinks in the tables below link to an external GitHub folder, which is subject to its own license terms:
> https://github.com/stm32-hotspot/ultralytics/blob/main/LICENSE
### Reference **NPU** memory footprint based on COCO dataset

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|-----------------------|
| [Yolov8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_256_quant_pc_ii_seg_coco-st.tflite) | COCO | Int8 | 256x256x3 | STM32N6 | 855 | 0.0 | 3393.42 | 3.0.0 |
| [Yolov8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_320_quant_pc_ii_seg_coco-st.tflite) | COCO | Int8 | 320x320x3 | STM32N6 | 1413.89 | 0.0 | 3435.34 | 3.0.0 |
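As a rough sanity check on the Weights Flash column (our own cross-check, not part of the measurements): with int8 quantization each weight occupies one byte, so the flash figure should be close to the model's parameter count. YOLOv8n-seg has roughly 3.4 M parameters per the Ultralytics model card (an assumption taken from outside this table), which lines up with the ~3.3–3.4 MiB reported above:

```python
# Rough consistency check: int8 storage is ~1 byte per weight.
# The 3.4 M parameter count for YOLOv8n-seg is an assumption taken
# from the Ultralytics model card, not from the table above.
def int8_weight_flash_kib(param_count: int) -> float:
    """Approximate flash for int8 weights, in KiB (ignores per-tensor
    scales/zero-points and TFLite container overhead)."""
    return param_count / 1024.0

approx = int8_weight_flash_kib(3_400_000)
# The table reports 3393.42 / 3435.34 KiB; the small surplus over this
# estimate is consistent with quantization metadata and format overhead.
print(round(approx, 2))  # → 3320.31
```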
### Reference **NPU** inference time based on COCO Person dataset

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|-----------------------|
| [YOLOv8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_256_quant_pc_ii_seg_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 31.57 | 29.72 | 3.0.0 |
| [YOLOv8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_320_quant_pc_ii_seg_coco-st.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 41.87 | 22.83 | 3.0.0 |
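The two timing columns can be cross-checked against each other: 1000 / latency gives the per-inference throughput ceiling, and the reported Inf / sec figures sit slightly below that ceiling (plausibly due to per-run measurement overhead; that interpretation is ours, not the table's). A quick check:

```python
# Cross-check the NPU inference table: latency-derived throughput
# ceiling vs the reported Inf/sec figures.
rows = [
    # (resolution, inference time in ms, reported inf/sec)
    ("256x256x3", 31.57, 29.72),
    ("320x320x3", 41.87, 22.83),
]
for res, latency_ms, reported in rows:
    ceiling = 1000.0 / latency_ms      # max inferences/sec from latency alone
    assert reported <= ceiling         # reported throughput includes overhead
    print(f"{res}: ceiling {ceiling:.2f} inf/s, reported {reported} inf/s")
```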
### Reference **MPU** inference time based on COCO 2017 Person dataset (instance segmentation)

| Model | Dataset | Format | Resolution | Quantization | Board | Execution Engine | Frequency | Inference time (ms) | %NPU | %GPU | %CPU | X-LINUX-AI version | Framework |
|-------|---------|--------|------------|--------------|-------|------------------|-----------|---------------------|------|------|------|--------------------|-----------|
| [YOLOv8n-seg](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_256_quant_pc_ii_seg_coco-st.tflite) | person_coco_2017 | Int8 | 256x256x3 | per-channel\*\* | STM32MP257F-EV1 | NPU/GPU | 800 MHz | 19.84 | 91.71 | 8.29 | 0 | v6.1.0 | TensorFlow Lite |
| [YOLOv8n-seg](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_320_quant_pc_ii_seg_coco-st.tflite) | person_coco_2017 | Int8 | 320x320x3 | per-channel\*\* | STM32MP257F-EV1 | NPU/GPU | 800 MHz | 30.97 | 93.59 | 6.41 | 0 | v6.1.0 | TensorFlow Lite |

\*\* **To get the most out of MP25 NPU hardware acceleration, please use per-tensor quantization.**

\*\* **Note:** On STM32MP2 devices, per-channel quantized models are internally converted to per-tensor quantization by the compiler using an entropy-based method. This may introduce a slight loss in accuracy compared to the original per-channel models.
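The accuracy note above can be illustrated numerically. A minimal NumPy sketch (our own toy illustration, not the compiler's actual entropy-based conversion) quantizes one weight tensor both ways and shows why collapsing per-channel scales into a single per-tensor scale costs precision when channel ranges differ:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy conv weights: 8 output channels with very different dynamic ranges.
w = rng.standard_normal((8, 64)) * np.linspace(0.05, 2.0, 8)[:, None]

def dequantized(w, scale):
    """Round-trip through symmetric int8 and back to float."""
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

# Per-channel: one int8 scale per output channel.
pc_scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
# Per-tensor: a single scale for the whole tensor.
pt_scale = np.abs(w).max() / 127.0

pc_err = np.abs(w - dequantized(w, pc_scale)).mean()
pt_err = np.abs(w - dequantized(w, pt_scale)).mean()
# Small-range channels get a much coarser grid under the per-tensor scale.
print(f"per-channel mean abs error: {pc_err:.5f}")
print(f"per-tensor  mean abs error: {pt_err:.5f}")
```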
## Retraining and Integration in a Simple Example

Please refer to the stm32ai-modelzoo-services GitHub [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).
<a id="1">[1]</a> T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context." European Conference on Computer Vision (ECCV), 2014. [Link](https://arxiv.org/abs/1405.0312)

<a id="2">[2]</a> Ultralytics, "YOLOv8: Next-Generation Object Detection and Segmentation Model." Ultralytics, 2023. [Link](https://github.com/ultralytics/ultralytics)