Object Detection
Commit 5bfd3bc (verified), committed by FBAGSTM. Parent(s): 72d6154

Update README.md

Files changed (1): README.md (+135 −6)
---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32ai-modelzoo/raw/refs/heads/main/object_detection/yolov8n/LICENSE.md
---
# Yolov8n object detection quantized

## **Use case** : `Object detection`

# Model description

Yolov8n is a lightweight and efficient object detection model. It is part of the YOLO (You Only Look Once) family of models, known for their real-time object detection capabilities. The "n" in Yolov8n indicates that it is a nano version, optimized for speed and resource efficiency, making it suitable for deployment on devices with limited computational power, such as mobile devices and embedded systems.

Yolov8n is implemented in PyTorch by Ultralytics and is quantized to int8 format using the TensorFlow Lite converter.
## Network information

| Network information | Value |
|-------------------------|-----------------|
| Framework | TensorFlow Lite |
| Quantization | int8 |
| Provenance | https://docs.ultralytics.com/tasks/detect/ |

## Networks inputs / outputs

With an image resolution of NxM and K classes to detect:

| Input Shape | Description |
| ----- | ----------- |
| (1, N, M, 3) | Single NxM RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
| ----- | ----------- |
| (1, 4+K, F) | FLOAT values, where F = (N/8)^2 + (N/16)^2 + (N/32)^2 (assuming N = M) is the total number of anchor points across the 3 concatenated feature maps |

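The shape relation above can be checked with a small helper. This is an illustrative sketch (the function name is not part of the model zoo API): the three YOLOv8 detection heads run at strides 8, 16 and 32, and their anchor points are concatenated along the last output axis.

```python
# Sketch: expected input/output tensor shapes for a square NxN input
# and K classes. Illustrative only, not a model zoo API.

def yolov8n_io_shapes(n: int, k: int):
    """Return (input_shape, output_shape) for an NxN input and K classes."""
    # Each head at stride s contributes (N/s)^2 anchor points;
    # the three heads are concatenated along the last axis.
    f = sum((n // s) ** 2 for s in (8, 16, 32))
    return (1, n, n, 3), (1, 4 + k, f)

# For the 256x256 COCO-Person model (K = 1 class):
print(yolov8n_io_shapes(256, 1))  # ((1, 256, 256, 3), (1, 5, 1344))
```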
## Recommended Platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0 | [] | [] |
| STM32L4 | [] | [] |
| STM32U5 | [] | [] |
| STM32H7 | [] | [] |
| STM32MP1 | [] | [] |
| STM32MP2 | [x] | [x] |
| STM32N6 | [x] | [x] |

# Performances

## Metrics

Measurements are made with the default STM32Cube.AI configuration, with the input/output allocated option enabled.

> [!CAUTION]
> All YOLOv8 hyperlinks in the tables below link to an external GitHub folder, which is subject to its own license terms:
> https://github.com/stm32-hotspot/ultralytics/blob/main/LICENSE
> Please also check the folder's README.md file for detailed information about its use and content:
> https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/README.md

### Reference **NPU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|----------------------|-----------------------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_192_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6 | 261 | 0 | 2936.52 | 10.2.0 | 2.2.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6 | 624 | 0 | 2941.09 | 10.2.0 | 2.2.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6 | 839.06 | 0 | 2947.02 | 10.2.0 | 2.2.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_416_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 416x416x3 | STM32N6 | 2242.84 | 0 | 2958.34 | 10.2.0 | 2.2.0 |

### Reference **NPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|----------------------|-----------------------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_192_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 16.88 | 59.24 | 10.2.0 | 2.2.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 24.94 | 40.1 | 10.2.0 | 2.2.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 31.75 | 31.5 | 10.2.0 | 2.2.0 |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_416_quant_pc_uf_od_coco-person.tflite) | COCO-Person | Int8 | 416x416x3 | STM32N6570-DK | NPU/MCU | 53.11 | 18.83 | 10.2.0 | 2.2.0 |

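As a sanity check on the table above, the "Inf / sec" column is simply the reciprocal of the inference time; a tiny helper (illustrative naming, not a model zoo function) reproduces the reported values:

```python
# Sketch: "Inf / sec" = 1000 / inference time (ms), rounded to 2 decimals.
def inferences_per_second(inference_time_ms: float) -> float:
    return round(1000.0 / inference_time_ms, 2)

print(inferences_per_second(16.88))  # 59.24, matching the 192x192 row
```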
### Reference **MPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Format | Resolution | Quantization | Board | Execution Engine | Frequency | Inference time (ms) | %NPU | %GPU | %CPU | X-LINUX-AI version | Framework |
|-------|--------|------------|--------------|-------|------------------|-----------|---------------------|------|------|------|--------------------|-----------|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-channel** | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 102.8 ms | 11.70 | 88.30 | 0 | v6.1.0 | OpenVX |
| [YOLOv8n per tensor](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pt_uf_pose_coco-st.tflite) | Int8 | 256x256x3 | per-tensor | STM32MP257F-DK2 | NPU/GPU | 800 MHz | 17.57 ms | 86.79 | 13.21 | 0 | v6.1.0 | OpenVX |

** **To get the most out of the MP25 NPU hardware acceleration, please use per-tensor quantization.**

### AP on COCO Person dataset

Dataset details: [link](https://cocodataset.org/#download), License [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode), Quotation [[1]](#1), Number of classes: 80, Number of images: 118,287

| Model | Format | Resolution | AP* |
|-------|--------|------------|-----|
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_192_quant_pc_uf_od_coco-person.tflite) | Int8 | 192x192x3 | 53.50 % |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_256_quant_pc_uf_od_coco-person.tflite) | Int8 | 256x256x3 | 58.40 % |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite) | Int8 | 320x320x3 | 61.80 % |
| [YOLOv8n per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_416_quant_pc_uf_od_coco-person.tflite) | Int8 | 416x416x3 | 64.80 % |

\* EVAL_IOU = 0.5, NMS_THRESH = 0.5, SCORE_THRESH = 0.001, MAX_DETECTIONS = 100

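The AP figures above depend on the post-processing thresholds listed in the footnote. A minimal sketch of that decoding step on the raw (1, 4+K, F) output is shown below, assuming (cx, cy, w, h) box encoding and a greedy NMS; the function names are illustrative and not part of the model zoo or Ultralytics APIs.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter + 1e-9)

def decode(raw, score_thresh=0.001, nms_thresh=0.5, max_det=100):
    """raw: (1, 4+K, F) float array; boxes encoded as (cx, cy, w, h)."""
    pred = raw[0].T                                   # (F, 4+K)
    cx, cy, w, h = pred[:, 0], pred[:, 1], pred[:, 2], pred[:, 3]
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    scores = pred[:, 4:].max(axis=1)                  # best class score
    classes = pred[:, 4:].argmax(axis=1)
    m = scores > score_thresh                         # SCORE_THRESH filter
    boxes, scores, classes = boxes[m], scores[m], classes[m]
    order, keep = scores.argsort()[::-1], []          # greedy NMS
    while order.size and len(keep) < max_det:         # MAX_DETECTIONS cap
        i = order[0]; keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= nms_thresh]
    return boxes[keep], scores[keep], classes[keep]
```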
## Integration in a simple example and other services support

Please refer to the stm32ai-modelzoo-services GitHub [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).
The models are stored in the Ultralytics repository. You can find them at the following link: [Ultralytics YOLOv8-STEdgeAI Models](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/).

Please refer to the [Ultralytics documentation](https://docs.ultralytics.com/tasks/detect/#train) to retrain the models.

# References

<a id="1">[1]</a>
"Microsoft COCO: Common Objects in Context". [Online]. Available: https://cocodataset.org/#download.

@article{DBLP:journals/corr/LinMBHPRDZ14,
  author    = {Tsung{-}Yi Lin and
               Michael Maire and
               Serge J. Belongie and
               Lubomir D. Bourdev and
               Ross B. Girshick and
               James Hays and
               Pietro Perona and
               Deva Ramanan and
               Piotr Doll{\'{a}}r and
               C. Lawrence Zitnick},
  title     = {Microsoft {COCO:} Common Objects in Context},
  journal   = {CoRR},
  volume    = {abs/1405.0312},
  year      = {2014},
  url       = {http://arxiv.org/abs/1405.0312},
  archivePrefix = {arXiv},
  eprint    = {1405.0312},
  timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}