---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/LICENSE.md
pipeline_tag: object-detection
---
# **STYOLOMilli**

## **Use case** : `Object detection`

## **Model description**

STYOLOMilli is a **compact and efficient object detection model** designed for deployment on **resource-constrained hardware** such as microcontroller units (MCUs) and neural processing units (NPUs). It is part of the STYOLO family introduced alongside STResNet in the paper [STResNet & STYOLO: A New Family of Compact Classification and Object Detection Models for MCUs](https://arxiv.org/abs/2601.05364).

The model builds on a highly compressed backbone derived from the STResNet design and integrates it into a YOLOX-style architecture optimized for low memory usage, high efficiency, and competitive accuracy. STYOLOMilli delivers strong object detection performance (33.6 mAP on MS COCO) with a **very small model footprint**, matching or outperforming other micro-sized detectors such as YOLOv5n and YOLOX-Nano on the same tasks.

The `st_yolodv2milli_pt` variant is implemented in **PyTorch** and is tuned for **ultra-efficient inference** on edge and MCU environments.

## **Network information**

| Network information | Value |
|--------------------|-------|
| Framework | Torch |
| Quantization | Int8 |
| Provenance | [STMicroelectronics Model Zoo Services](https://github.com/STMicroelectronics/stm32ai-modelzoo-services/tree/main/object_detection/object_detection) |
| Paper | [STResNet & STYOLO (arXiv:2601.05364)](https://arxiv.org/abs/2601.05364) |

The model is quantized to **int8** using **ONNX Runtime** and exported for efficient deployment.
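
Int8 quantization represents each float tensor as 8-bit integers plus a scale and zero-point. The snippet below is only a minimal NumPy illustration of that affine mapping (function names are illustrative); it is not the actual ONNX Runtime quantization tooling used to produce the checkpoints.

```python
import numpy as np

def quantize_int8(x):
    """Affine int8 quantization: x ~= scale * (q - zero_point)."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)
```

The reconstruction error per element is bounded by roughly one quantization step, which is why int8 models trade a small accuracy drop (visible in the AP tables below) for a 4x smaller footprint than float32.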

## Network inputs / outputs

For an image resolution of WxH and NC classes:

| Input Shape | Description |
| ----- | ----------- |
| (1, W, H, 3) | Single WxH RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
| ----- | ----------- |
| (1, (W/8xH/8 + W/16xH/16 + W/32xH/32), (NC+1+4)) | The model returns (NC+1+4) values per bounding box: four coordinates (x1, y1, x2, y2), NC class confidences, and one objectness confidence. For a 320x320 input this gives 40x40 + 20x20 + 10x10 = 2100 candidate boxes. |
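
As a sketch of how the raw output tensor above might be post-processed, the following assumes a `[x1, y1, x2, y2, objectness, NC class scores]` column layout (the exact column ordering is not documented here; adjust to the actual export) and the evaluation thresholds listed in the AP sections below. The function name is illustrative.

```python
import numpy as np

def decode_detections(raw, score_thresh=0.001, max_detections=100):
    """Turn a (1, P, NC+1+4) raw tensor into (boxes, scores, class_ids).

    Assumed per-prediction layout: [x1, y1, x2, y2, objectness, NC class scores].
    """
    preds = raw[0]                    # (P, NC+5)
    boxes = preds[:, :4]              # corner coordinates
    objectness = preds[:, 4]
    class_scores = preds[:, 5:]
    class_ids = class_scores.argmax(axis=1)
    scores = objectness * class_scores.max(axis=1)   # combined confidence
    keep = scores > score_thresh
    order = np.argsort(-scores[keep])[:max_detections]
    return boxes[keep][order], scores[keep][order], class_ids[keep][order]
```

Non-maximum suppression (e.g. at NMS_THRESH = 0.5) would still be applied to the kept boxes afterwards.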

## Recommended Platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []        | []          |
| STM32L4  | []        | []          |
| STM32U5  | []        | []          |
| STM32H7  | []        | []          |
| STM32MP1 | []        | []          |
| STM32MP2 | []        | []          |
| STM32N6  | [x]       | [x]         |

# Performances

## Metrics

Measurements are done with the default STEdgeAI Core configuration, with the input/output allocated option enabled.

### Reference **NPU** memory footprint based on COCO dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STEdgeAI Core version |
|-------|---------|--------|------------|--------|-------------------|-------------------|--------------------|-----------------------|
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_192/st_yolodv2milli_actrelu_pt_coco_192_qdq_int8.onnx) | COCO | Int8 | 192x192x3 | STM32N6 | 1008.00 | 0 | 3170.29 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_320/st_yolodv2milli_actrelu_pt_coco_320_qdq_int8.onnx) | COCO | Int8 | 320x320x3 | STM32N6 | 2744.48 | 800.00 | 3182.67 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_640/st_yolodv2milli_actrelu_pt_coco_640_qdq_int8.onnx) | COCO | Int8 | 640x640x3 | STM32N6 | 2768.00 | 9600.00 | 3189.38 | 3.0.0 |

* The 640x640 COCO checkpoints are provided primarily for fine-tuning, as they were trained on a large dataset at higher resolution; the 640-resolution models are not suitable for deployment.

### Reference **NPU** inference time based on COCO dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|--------------------|-----------|-----------------------|
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_192/st_yolodv2milli_actrelu_pt_coco_192_qdq_int8.onnx) | COCO | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 18.78 | 53.25 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_320/st_yolodv2milli_actrelu_pt_coco_320_qdq_int8.onnx) | COCO | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 70.57 | 14.17 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_640/st_yolodv2milli_actrelu_pt_coco_640_qdq_int8.onnx) | COCO | Int8 | 640x640x3 | STM32N6570-DK | NPU/MCU | 3417.54 | 0.29 | 3.0.0 |
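
The Inf / sec column is simply the reciprocal of the per-inference latency, which makes it easy to check whether a given resolution meets a frame-rate budget:

```python
def throughput_per_sec(latency_ms):
    """Inferences per second from a per-inference latency in milliseconds."""
    return 1000.0 / latency_ms

# 192x192 model on STM32N6570-DK: 18.78 ms per inference -> ~53 inf/sec
print(round(throughput_per_sec(18.78), 2))
```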

### Reference **NPU** memory footprint based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STEdgeAI Core version |
|-------|---------|--------|------------|--------|-------------------|-------------------|--------------------|-----------------------|
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_192/st_yolodv2milli_actrelu_pt_coco_person_192_qdq_int8.onnx) | COCO-Person | Int8 | 192x192x3 | STM32N6 | 1008.00 | 0 | 3155.48 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_256/st_yolodv2milli_actrelu_pt_coco_person_256_qdq_int8.onnx) | COCO-Person | Int8 | 256x256x3 | STM32N6 | 2320.00 | 0 | 3164.48 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_320/st_yolodv2milli_actrelu_pt_coco_person_320_qdq_int8.onnx) | COCO-Person | Int8 | 320x320x3 | STM32N6 | 2743.88 | 800.00 | 3167.85 | 3.0.0 |

### Reference **NPU** inference time based on COCO Person dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|--------------------|-----------|-----------------------|
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_192/st_yolodv2milli_actrelu_pt_coco_person_192_qdq_int8.onnx) | COCO-Person | Int8 | 192x192x3 | STM32N6570-DK | NPU/MCU | 17.90 | 55.87 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_256/st_yolodv2milli_actrelu_pt_coco_person_256_qdq_int8.onnx) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 28.36 | 35.26 | 3.0.0 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_320/st_yolodv2milli_actrelu_pt_coco_person_320_qdq_int8.onnx) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 68.54 | 14.59 | 3.0.0 |

### AP on COCO dataset

Dataset details: [link](https://cocodataset.org/#download), License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode), Number of classes: 80

| Model | Format | Resolution | AP50 |
| --- | --- | --- | --- |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_192/st_yolodv2milli_actrelu_pt_coco_192.onnx) | Float | 3x192x192 | 35.42 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_192/st_yolodv2milli_actrelu_pt_coco_192_qdq_int8.onnx) | Int8 | 3x192x192 | 32.29 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_320/st_yolodv2milli_actrelu_pt_coco_320.onnx) | Float | 3x320x320 | 45.64 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_320/st_yolodv2milli_actrelu_pt_coco_320_qdq_int8.onnx) | Int8 | 3x320x320 | 43.79 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_640/st_yolodv2milli_actrelu_pt_coco_640.onnx) | Float | 3x640x640 | 52.64 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco/st_yolodv2milli_actrelu_pt_coco_640/st_yolodv2milli_actrelu_pt_coco_640_qdq_int8.onnx) | Int8 | 3x640x640 | 51.17 |

\* EVAL_IOU = 0.5, NMS_THRESH = 0.5, SCORE_THRESH = 0.001, MAX_DETECTIONS = 100
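
AP50 counts a detection as a true positive when its intersection-over-union (IoU) with a matching ground-truth box is at least 0.5 (the EVAL_IOU above); NMS_THRESH drives duplicate suppression. A minimal IoU helper for two (x1, y1, x2, y2) boxes, as a sketch:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```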

### AP on COCO-Person dataset

Dataset details: [link](https://cocodataset.org/#download), License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode), Number of classes: 1

| Model | Format | Resolution | AP50 |
| --- | --- | --- | --- |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_192/st_yolodv2milli_actrelu_pt_coco_person_192.onnx) | Float | 3x192x192 | 61.71 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_192/st_yolodv2milli_actrelu_pt_coco_person_192_qdq_int8.onnx) | Int8 | 3x192x192 | 59.91 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_256/st_yolodv2milli_actrelu_pt_coco_person_256.onnx) | Float | 3x256x256 | 68.07 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_256/st_yolodv2milli_actrelu_pt_coco_person_256_qdq_int8.onnx) | Int8 | 3x256x256 | 66.16 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_320/st_yolodv2milli_actrelu_pt_coco_person_320.onnx) | Float | 3x320x320 | 72.12 |
| [st_yolodv2milli_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/st_yolodv2milli_pt/ST_pretrainedmodel_public_dataset/coco_person/st_yolodv2milli_actrelu_pt_coco_person_320/st_yolodv2milli_actrelu_pt_coco_person_320_qdq_int8.onnx) | Int8 | 3x320x320 | 70.91 |

\* EVAL_IOU = 0.5, NMS_THRESH = 0.5, SCORE_THRESH = 0.001, MAX_DETECTIONS = 100
126
+
127
+ ## Retraining and Integration in a simple example:
128
+
129
+ Please refer to the stm32ai-modelzoo-services GitHub [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services)

## References

- **STYOLO / STResNet paper**
  [S. Sah & R. Kumar, *STResNet & STYOLO: A New Family of Compact Classification and Object Detection Models for MCUs*](https://arxiv.org/abs/2601.05364)

- **YOLOX (inspiration for the STYOLO architecture)**
  [Ge et al., *YOLOX: Exceeding YOLO Series in 2021*](https://arxiv.org/abs/2107.08430)

- **MS COCO dataset**
  [Lin et al., *Microsoft COCO: Common Objects in Context*](https://arxiv.org/abs/1405.0312)