Update README.md
README.md CHANGED
@@ -74,7 +74,10 @@ For an image resolution of NxM and P classes

## Metrics

-Measures are done with default STM32Cube.AI configuration with enabled input / output allocated option.

### Reference **NPU** memory footprint on food-101 and ImageNet dataset (see Accuracy for details on dataset)
@@ -186,7 +189,7 @@ Dataset details: [link](https://data.mendeley.com/datasets/tywbtsjrjv/1) , Licen

### Accuracy with Food-101 dataset

-Dataset details: [link](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/) ,

| Model | Format | Resolution | Top 1 Accuracy |
|-------|--------|------------|----------------|
@@ -223,7 +226,7 @@ Dataset details: [link](https://cocodataset.org/) , License [Creative Commons At

### Accuracy with ImageNet

-Dataset details: [link](https://www.image-net.org),
Number of classes: 1000.
To perform the quantization, we calibrated the activations with a random subset of the training set.
For the sake of simplicity, the accuracy reported here was estimated on the 50000 labelled images of the validation set.
## Metrics

+- Measurements are made with the default STM32Cube.AI configuration, with the input/output allocated option enabled.
+- `tfs` stands for "training from scratch": the model weights were randomly initialized before training.
+- `tl` stands for "transfer learning": the model backbone weights were initialized from a pre-trained model, and only the last layer was unfrozen during training.
+- `fft` stands for "full fine-tuning": the full model weights were initialized from a transfer-learning pre-trained model, and all layers were unfrozen during training.

### Reference **NPU** memory footprint on food-101 and ImageNet dataset (see Accuracy for details on dataset)
### Accuracy with Food-101 dataset

+Dataset details: [link](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/), Quotation [[3]](#3), Number of classes: 101, Number of images: 101 000

| Model | Format | Resolution | Top 1 Accuracy |
|-------|--------|------------|----------------|
### Accuracy with ImageNet

+Dataset details: [link](https://www.image-net.org), Quotation [[4]](#4)
Number of classes: 1000.
To perform the quantization, we calibrated the activations with a random subset of the training set.
For the sake of simplicity, the accuracy reported here was estimated on the 50000 labelled images of the validation set.
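The calibration step described above can be sketched with the standard TensorFlow Lite post-training quantization API. This is a generic example, not the model zoo's actual conversion script; the full-int8 input/output settings are an assumption reflecting common int8 deployment targets such as STM32Cube.AI:

```python
import numpy as np
import tensorflow as tf


def quantize(model: tf.keras.Model, calib_images: np.ndarray) -> bytes:
    """Post-training int8 quantization, calibrating activations on a
    random subset of training images (sketch; names are illustrative)."""

    def representative_dataset():
        # Yield one batch of one image at a time, as float32, so the
        # converter can observe activation ranges for calibration.
        for img in calib_images:
            yield [img[np.newaxis].astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Assumed deployment constraint: full int8 ops and int8 model I/O.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

In practice a few hundred randomly drawn training images are typically enough for `calib_images`; the validation set is then left untouched for the accuracy figures reported above.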