Image Classification

Commit 1218a5a (verified) · FBAGSTM committed · Parent: dc8757e

Release AI-ModelZoo-4.0.0

Files changed (1): README.md (+111 −3)
---
license: apache-2.0
pipeline_tag: image-classification
---
# HardNet

## **Use case**: `Image classification`

# Model description

Harmonic DenseNet (HarDNet) is a memory-efficient variant of DenseNet that optimizes for both **computational efficiency and memory access cost**. It introduces a harmonic pattern in the dense connections to reduce redundant feature computations.

HarDNet features **harmonic dense connections** that sparsify the connection pattern to minimize memory bandwidth, while retaining the benefits of DenseNet's feature reuse. The architecture combines **depthwise separable convolutions** with dense blocks for enhanced efficiency.

Designed for practical hardware deployment, HarDNet provides DenseNet-like feature richness at a lower memory cost on edge devices.

(source: https://arxiv.org/abs/1909.00948)

The model is quantized to **int8** using **ONNX Runtime** and exported for efficient deployment.

## Network information

| Network Information | Value |
|---------------------|-------|
| Framework           | Torch |
| MParams             | ~3.43 M |
| Quantization        | Int8 |
| Provenance          | https://github.com/PingoLH/Pytorch-HarDNet |
| Paper               | https://arxiv.org/abs/1909.00948 |

## Network inputs / outputs

For an image resolution of NxM and P classes:

| Input Shape | Description |
| ----- | ----------- |
| (1, N, M, 3) | Single NxM RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
| ----- | ----------- |
| (1, P) | Per-class confidence for P classes in FLOAT32 |

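The I/O contract above can be checked with a short NumPy sketch. The image and scores here are dummy data just to show the expected shapes and dtypes; real confidences come from the model.

```python
import numpy as np

N, M, P = 224, 224, 1000  # resolution and class count used by hardnet39ds_pt_224

# Input: a single NxM RGB image as UINT8, batch dimension first.
image = np.random.randint(0, 256, size=(1, N, M, 3), dtype=np.uint8)

# Output: the model returns per-class confidences as FLOAT32 (fabricated here).
scores = np.random.rand(1, P).astype(np.float32)

# The predicted class is simply the index of the highest confidence.
top1 = int(np.argmax(scores, axis=1)[0])
assert image.shape == (1, N, M, 3) and image.dtype == np.uint8
assert scores.shape == (1, P) and scores.dtype == np.float32
```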
## Recommended platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []  | []  |
| STM32L4  | []  | []  |
| STM32U5  | []  | []  |
| STM32H7  | []  | []  |
| STM32MP1 | []  | []  |
| STM32MP2 | []  | []  |
| STM32N6  | [x] | [x] |

# Performances

## Metrics

- Measurements are taken with the default STEdgeAI Core configuration, with the input / output allocated option enabled.
- All models are trained from scratch on the ImageNet dataset.

### Reference **NPU** memory footprint on ImageNet dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|-----------------------|
| [hardnet39ds_pt_224](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224_qdq_int8.onnx) | ImageNet | Int8 | 224×224×3 | STM32N6 | 1476.12 | 0 | 3516.67 | 3.0.0 |

### Reference **NPU** inference time on ImageNet dataset (see Accuracy for details on dataset)

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|-----------------------|
| [hardnet39ds_pt_224](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224_qdq_int8.onnx) | ImageNet | Int8 | 224×224×3 | STM32N6570-DK | NPU/MCU | 65.81 | 15.19 | 3.0.0 |

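As a sanity check on the table above, the Inf / sec column is just the reciprocal of the latency expressed in milliseconds:

```python
# Throughput is derived from measured latency: 1000 ms / latency_ms.
inference_time_ms = 65.81                            # hardnet39ds_pt_224 on STM32N6570-DK
inferences_per_second = 1000.0 / inference_time_ms   # ~15.2, matching the 15.19 listed
```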
### Accuracy with ImageNet dataset

| Model | Format | Resolution | Top 1 Accuracy |
| --- | --- | --- | --- |
| [hardnet39ds_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224.onnx) | Float | 224x224x3 | 74.38 % |
| [hardnet39ds_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224_qdq_int8.onnx) | Int8 | 224x224x3 | 73.61 % |

Dataset details: [link](https://www.image-net.org)
Number of classes: 1000.
To perform the quantization, we calibrated the activations with a random subset of the training set.
For the sake of simplicity, the accuracy reported here was estimated on the 50,000 labelled images of the validation set.

## Retraining and Integration in a simple example

Please refer to the stm32ai-modelzoo-services GitHub repository [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).


# References

<a id="1">[1]</a> - **Dataset**: ImageNet (ILSVRC 2012) — https://www.image-net.org/

<a id="2">[2]</a> - **Model**: HarDNet — https://github.com/PingoLH/Pytorch-HarDNet