---
language: en
license: apache-2.0
model_name: zfnet512-3.onnx
tags:
- validated
- vision
- classification
- zfnet-512
---
<!--- SPDX-License-Identifier: MIT -->

# ZFNet-512

|Model |Download |Download (with sample test data)| ONNX version |Opset version|Top-1 accuracy (%)|Top-5 accuracy (%)|
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
|ZFNet-512| [341 MB](model/zfnet512-3.onnx) | [320 MB](model/zfnet512-3.tar.gz) | 1.1 | 3| | |
|ZFNet-512| [341 MB](model/zfnet512-6.onnx) | [320 MB](model/zfnet512-6.tar.gz) | 1.1.2 | 6| | |
|ZFNet-512| [341 MB](model/zfnet512-7.onnx) | [320 MB](model/zfnet512-7.tar.gz) | 1.2 | 7| | |
|ZFNet-512| [341 MB](model/zfnet512-8.onnx) | [318 MB](model/zfnet512-8.tar.gz) | 1.3 | 8| | |
|ZFNet-512| [341 MB](model/zfnet512-9.onnx) | [318 MB](model/zfnet512-9.tar.gz) | 1.4 | 9| | |
|ZFNet-512| [333 MB](model/zfnet512-12.onnx) | [309 MB](model/zfnet512-12.tar.gz) | 1.9 | 12|55.97|79.41|
|ZFNet-512-int8| [83 MB](model/zfnet512-12-int8.onnx) | [48 MB](model/zfnet512-12-int8.tar.gz) | 1.9 | 12|55.84|79.33|
|ZFNet-512-qdq| [84 MB](model/zfnet512-12-qdq.onnx) | [56 MB](model/zfnet512-12-qdq.tar.gz) | 1.9 | 12|55.83|79.42|
> Compared with the fp32 ZFNet-512, the int8 ZFNet-512's Top-1 accuracy drops by 0.23% (relative), its Top-5 accuracy drops by 0.10% (relative), and its performance improves by 1.78x.
>
> **Note**
>
> Different preprocessing methods lead to different accuracies; the accuracies in the table depend on this specific [preprocess method](https://github.com/intel-innersource/frameworks.ai.lpot.intel-lpot/blob/master/examples/onnxrt/onnx_model_zoo/zfnet/main.py).
>
> Performance depends on the test hardware. The data here was collected with an Intel® Xeon® Platinum 8280 Processor (1 socket, 4 cores per instance) on CentOS Linux 8.3 with a data batch size of 1.
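The relative drop ratios quoted in the note can be reproduced from the accuracy columns of the table; a quick check in Python:

```python
# Relative accuracy drop of int8 ZFNet-512 vs. fp32, using the table values.
fp32_top1, int8_top1 = 55.97, 55.84
fp32_top5, int8_top5 = 79.41, 79.33

top1_drop = (fp32_top1 - int8_top1) / fp32_top1 * 100  # percent, relative
top5_drop = (fp32_top5 - int8_top5) / fp32_top5 * 100

print(f"Top-1 drop: {top1_drop:.2f}%  Top-5 drop: {top5_drop:.2f}%")
# Top-1 drop: 0.23%  Top-5 drop: 0.10%
```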

## Description
ZFNet-512 is a deep convolutional network for image classification.
This model's 4th layer has 512 feature maps instead of the 1024 maps described in the paper.

### Dataset
[ILSVRC2013](http://www.image-net.org/challenges/LSVRC/2013/)

## Source
Caffe2 ZFNet-512 ==> ONNX ZFNet-512

## Model input and output
### Input
```
gpu_0/data_0: float[1, 3, 224, 224]
```
### Output
```
gpu_0/softmax_1: float[1, 1000]
```
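The I/O contract above can be exercised offline. A minimal sketch (not part of the original card): it builds an input of the declared shape and reads the top-5 classes out of a softmax-shaped output. In real use, the stand-in score array would come from an onnxruntime session run on `zfnet512-12.onnx`.

```python
import numpy as np

# Sketch of the I/O spec above: gpu_0/data_0 -> float[1, 3, 224, 224],
# gpu_0/softmax_1 -> float[1, 1000]. The model itself is not loaded here;
# real inference would feed make_input() to an onnxruntime InferenceSession.

def make_input(seed=0):
    # Random stand-in for a preprocessed 224x224 RGB image batch.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((1, 3, 224, 224)).astype(np.float32)

def top5(softmax_out):
    # Indices of the 5 highest-scoring classes in a float[1, 1000] output.
    return np.argsort(softmax_out[0])[::-1][:5]

x = make_input()
print(x.shape, x.dtype)  # (1, 3, 224, 224) float32
```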
### Pre-processing steps
### Post-processing steps
### Sample test data
Randomly generated sample test data:
- test_data_set_0
- test_data_set_1
- test_data_set_2
- test_data_set_3
- test_data_set_4
- test_data_set_5

## Results/accuracy on test set

## Quantization
ZFNet-512-int8 and ZFNet-512-qdq are obtained by quantizing the fp32 ZFNet-512 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/zfnet/quantization/ptq/README.md) to learn how to use Intel® Neural Compressor for quantization.

### Environment
onnx: 1.9.0
onnxruntime: 1.8.0

### Prepare model
```shell
wget https://github.com/onnx/models/raw/main/vision/classification/zfnet-512/model/zfnet512-12.onnx
```

### Model quantize
Make sure to specify the appropriate dataset path in the configuration file.
```bash
# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --config=zfnet512.yaml \
                   --data_path=/path/to/imagenet \
                   --label_path=/path/to/imagenet/label \
                   --output_model=path/to/save
```

## References
* [Visualizing and Understanding Convolutional Networks](https://arxiv.org/abs/1311.2901)

* [Intel® Neural Compressor](https://github.com/intel/neural-compressor)

## Contributors
* [mengniwang95](https://github.com/mengniwang95) (Intel)
* [yuwenzho](https://github.com/yuwenzho) (Intel)
* [airMeng](https://github.com/airMeng) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

## License
MIT