qaihm-bot committed on
Commit
c1f2411
·
verified ·
1 Parent(s): 5022566

See https://github.com/quic/ai-hub-models/releases/v0.46.1 for changelog.

MobileNet-v3-Small_float.dlc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:03d41317c666023023a08bd5046ef179d3eecea5222e1a618dadff21e2d20a0a
- size 10303388

MobileNet-v3-Small_float.onnx.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:dfe68a35128a659cc4bdfe12a8e941ba7c590479b9367e5ca731600fe623131a
- size 9443785

MobileNet-v3-Small_float.tflite DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0cd846648051ddf2c0036f17fb9bd6e35cbdb7559b623c3cc6e672048da63f4e
- size 10184108

README.md CHANGED
@@ -12,244 +12,107 @@ pipeline_tag: image-classification
 
 ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/web-assets/model_demo.png)
 
- # MobileNet-v3-Small: Optimized for Mobile Deployment
- ## Imagenet classifier and general purpose backbone
-
 
 MobileNetV3Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
 
- This model is an implementation of MobileNet-v3-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py).
-
-
- This repository provides scripts to run MobileNet-v3-Small on Qualcomm® devices.
- More details on model performance across various devices, can be found
- [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
-
-
-
- ### Model Details
-
- - **Model Type:** Model_use_case.image_classification
- - **Model Stats:**
-   - Model checkpoint: Imagenet
-   - Input resolution: 224x224
-   - Number of parameters: 2.54M
-   - Model size (float): 9.71 MB
-
- | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
- |---|---|---|---|---|---|---|---|---|
- | MobileNet-v3-Small | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.067 ms | 0 - 124 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.979 ms | 1 - 123 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.442 ms | 0 - 151 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.426 ms | 1 - 147 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.789 ms | 0 - 3 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.763 ms | 1 - 3 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 0.711 ms | 0 - 8 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
- | MobileNet-v3-Small | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.091 ms | 0 - 124 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.056 ms | 1 - 123 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.067 ms | 0 - 124 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.979 ms | 1 - 123 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.41 ms | 0 - 131 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.391 ms | 0 - 130 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.091 ms | 0 - 124 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.056 ms | 1 - 123 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.52 ms | 0 - 146 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.516 ms | 1 - 146 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.454 ms | 0 - 119 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
- | MobileNet-v3-Small | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 0.402 ms | 0 - 128 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 0.391 ms | 0 - 128 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 0.388 ms | 0 - 100 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
- | MobileNet-v3-Small | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | TFLITE | 0.325 ms | 0 - 128 MB | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
- | MobileNet-v3-Small | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | QNN_DLC | 0.324 ms | 0 - 127 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 0.357 ms | 0 - 100 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
- | MobileNet-v3-Small | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.92 ms | 1 - 1 MB | NPU | [MobileNet-v3-Small.dlc](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.dlc) |
- | MobileNet-v3-Small | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.671 ms | 5 - 5 MB | NPU | [MobileNet-v3-Small.onnx.zip](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx.zip) |
-
-
-
-
- ## Installation
-
-
- Install the package via pip:
- ```bash
- pip install qai-hub-models
- ```
-
-
- ## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device
-
- Sign-in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
- Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
-
- With this API token, you can configure your client to run models on the cloud
- hosted devices.
- ```bash
- qai-hub configure --api_token API_TOKEN
- ```
- Navigate to [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
-
-
-
- ## Demo off target
-
- The package contains a simple end-to-end demo that downloads pre-trained
- weights and runs this model on a sample input.
-
- ```bash
- python -m qai_hub_models.models.mobilenet_v3_small.demo
- ```
-
- The above demo runs a reference implementation of pre-processing, model
- inference, and post processing.
-
- **NOTE**: If you want running in a Jupyter Notebook or Google Colab like
- environment, please add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.mobilenet_v3_small.demo
- ```
-
-
- ### Run model on a cloud-hosted device
-
- In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
- device. This script does the following:
- * Performance check on-device on a cloud-hosted device
- * Downloads compiled assets that can be deployed on-device for Android.
- * Accuracy check between PyTorch and on-device outputs.
-
- ```bash
- python -m qai_hub_models.models.mobilenet_v3_small.export
- ```
-
-
-
- ## How does this work?
-
- This [export script](https://aihub.qualcomm.com/models/mobilenet_v3_small/qai_hub_models/models/MobileNet-v3-Small/export.py)
- leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
- on-device. Lets go through each step below in detail:
-
- Step 1: **Compile model for on-device deployment**
-
- To compile a PyTorch model for on-device deployment, we first trace the model
- in memory using the `jit.trace` and then call the `submit_compile_job` API.
-
- ```python
- import torch
-
- import qai_hub as hub
- from qai_hub_models.models.mobilenet_v3_small import Model
-
- # Load the model
- torch_model = Model.from_pretrained()
-
- # Device
- device = hub.Device("Samsung Galaxy S25")
-
- # Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
-
- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
- # Compile model on a specific device
- compile_job = hub.submit_compile_job(
-     model=pt_model,
-     device=device,
-     input_specs=torch_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- target_model = compile_job.get_target_model()
-
- ```
-
-
- Step 2: **Performance profiling on cloud-hosted device**
-
- After compiling models from step 1. Models can be profiled model on-device using the
- `target_model`. Note that this scripts runs the model on a device automatically
- provisioned in the cloud. Once the job is submitted, you can navigate to a
- provided job URL to view a variety of on-device performance metrics.
- ```python
- profile_job = hub.submit_profile_job(
-     model=target_model,
-     device=device,
- )
-
- ```
-
- Step 3: **Verify on-device accuracy**
-
- To verify the accuracy of the model on-device, you can run on-device inference
- on sample input data on the same cloud hosted device.
- ```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
-     model=target_model,
-     device=device,
-     inputs=input_data,
- )
- on_device_output = inference_job.download_output_data()
-
- ```
- With the output of the model, you can compute like PSNR, relative errors or
- spot check the output with expected output.
-
- **Note**: This on-device profiling and inference requires access to Qualcomm®
- AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
-
-
- ## Run demo on a cloud-hosted device
-
- You can also run the demo on-device.
-
- ```bash
- python -m qai_hub_models.models.mobilenet_v3_small.demo --eval-mode on-device
- ```
-
- **NOTE**: If you want running in a Jupyter Notebook or Google Colab like
- environment, please add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.mobilenet_v3_small.demo -- --eval-mode on-device
- ```
-
-
- ## Deploying compiled model to Android
-
-
- The models can be deployed using multiple runtimes:
- - TensorFlow Lite (`.tflite` export): [This
-   tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
-   guide to deploy the .tflite model in an Android application.
-
-
- - QNN (`.so` export ): This [sample
-   app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
-   provides instructions on how to use the `.so` shared library in an Android application.
-
-
- ## View on Qualcomm® AI Hub
- Get more details on MobileNet-v3-Small's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
- Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
-
 
 ## License
 * The license for the original implementation of MobileNet-v3-Small can be found
 [here](https://github.com/pytorch/vision/blob/main/LICENSE).
 
-
-
 ## References
 * [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
 
-
-
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
-
-
 
 
 ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/web-assets/model_demo.png)
 
+ # MobileNet-v3-Small: Optimized for Qualcomm Devices
 
  MobileNetV3Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
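Before running the classifier, the 224x224 RGB input has to be turned into a normalized tensor. Below is a minimal NumPy sketch of that step, assuming the conventional torchvision-style ImageNet preprocessing (the mean/std constants and the CHW layout are assumptions here; the authoritative preprocessing lives in the qai_hub_models demo code).

```python
import numpy as np

# Conventional ImageNet normalization constants (assumption: this model
# follows the standard torchvision preprocessing recipe).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an already-resized 224x224x3 uint8 image to a 1x3x224x224 float32 batch."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                            # channel-wise normalize
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return x[np.newaxis, ...]                       # add batch dimension

# Example with a dummy black image
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
```

Resizing/cropping to 224x224 is left out; any image library can produce the input array.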
 
+ This is based on the implementation of MobileNet-v3-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py).
+ This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilenet_v3_small) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
+
+ Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
+
+
+ ## Getting Started
+ There are two ways to deploy this model on your device:
+
+ ### Option 1: Download Pre-Exported Models
+
+ Below are pre-exported model assets ready for deployment.
+
+ | Runtime | Precision | Chipset | SDK Versions | Download |
+ |---|---|---|---|---|
+ | ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/releases/v0.46.1/mobilenet_v3_small-onnx-float.zip) |
+ | ONNX | w8a16 | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/releases/v0.46.1/mobilenet_v3_small-onnx-w8a16.zip) |
+ | QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/releases/v0.46.1/mobilenet_v3_small-qnn_dlc-float.zip) |
+ | QNN_DLC | w8a16 | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/releases/v0.46.1/mobilenet_v3_small-qnn_dlc-w8a16.zip) |
+ | TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_small/releases/v0.46.1/mobilenet_v3_small-tflite-float.zip) |
+
+ For more device-specific assets and performance metrics, visit **[MobileNet-v3-Small on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/mobilenet_v3_small)**.
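The download links above all follow one naming pattern, so the URL for any runtime/precision pair can be assembled in a script. `asset_url` below is a hypothetical helper that mirrors that pattern, not part of any published API.

```python
BASE = ("https://qaihub-public-assets.s3.us-west-2.amazonaws.com/"
        "qai-hub-models/models/mobilenet_v3_small/releases")

def asset_url(runtime: str, precision: str, version: str = "v0.46.1") -> str:
    # Hypothetical helper: mirrors the naming pattern of the table above,
    # e.g. mobilenet_v3_small-tflite-float.zip.
    return f"{BASE}/{version}/mobilenet_v3_small-{runtime}-{precision}.zip"

url = asset_url("tflite", "float")
```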
+
+
+ ### Option 2: Export with Custom Configurations
+
+ Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilenet_v3_small) Python library to compile and export the model with your own:
+ - Custom weights (e.g., fine-tuned checkpoints)
+ - Custom input shapes
+ - Target device and runtime configurations
+
+ This option is ideal if you need to customize the model beyond the default configuration provided here.
+
+ See the [MobileNet-v3-Small on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilenet_v3_small) repository for usage instructions.
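A typical export flow, sketched from the library's documented commands (requires an AI Hub Workbench account; `API_TOKEN` stands in for your token from `Account -> Settings -> API Token`):

```bash
# Install the Qualcomm AI Hub Models library
pip install qai-hub-models

# Authenticate your client against AI Hub Workbench
qai-hub configure --api_token API_TOKEN

# Compile, profile, and download deployable assets for this model
python -m qai_hub_models.models.mobilenet_v3_small.export
```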
+
+ ## Model Details
+
+ **Model Type:** Image classification
+
+ **Model Stats:**
+ - Model checkpoint: Imagenet
+ - Input resolution: 224x224
+ - Number of parameters: 2.54M
+ - Model size (float): 9.71 MB
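As a quick back-of-envelope check, the reported float size is consistent with storing 2.54M parameters as float32 (4 bytes each); the small remainder is assumed to be graph metadata and other overhead.

```python
params = 2.54e6            # reported parameter count
size_bytes = params * 4    # float32 weights: 4 bytes per parameter
size_mib = size_bytes / 2**20
# ~9.69 MiB, in line with the reported 9.71 MB
```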
+
+ ## Performance Summary
+ | Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
+ |---|---|---|---|---|---|---|
+ | MobileNet-v3-Small | ONNX | float | Snapdragon® X Elite | 0.676 ms | 5 - 5 MB | NPU |
+ | MobileNet-v3-Small | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 0.515 ms | 0 - 111 MB | NPU |
+ | MobileNet-v3-Small | ONNX | float | Qualcomm® QCS8550 (Proxy) | 0.741 ms | 0 - 8 MB | NPU |
+ | MobileNet-v3-Small | ONNX | float | Qualcomm® QCS9075 | 1.014 ms | 1 - 3 MB | NPU |
+ | MobileNet-v3-Small | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 0.398 ms | 0 - 101 MB | NPU |
+ | MobileNet-v3-Small | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 0.346 ms | 0 - 100 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Snapdragon® X Elite | 0.977 ms | 1 - 1 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 0.556 ms | 0 - 46 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 2.099 ms | 1 - 31 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 0.828 ms | 1 - 2 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® SA8775P | 1.102 ms | 1 - 32 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® QCS9075 | 0.986 ms | 3 - 5 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 1.601 ms | 0 - 47 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® SA7255P | 2.099 ms | 1 - 31 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Qualcomm® SA8295P | 1.464 ms | 0 - 29 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 0.405 ms | 1 - 31 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 0.325 ms | 1 - 34 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Snapdragon® X Elite | 0.955 ms | 0 - 0 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Snapdragon® 8 Gen 3 Mobile | 0.544 ms | 0 - 37 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® QCS6490 | 2.28 ms | 0 - 2 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® QCS8275 (Proxy) | 1.717 ms | 0 - 26 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® QCS8550 (Proxy) | 0.798 ms | 0 - 15 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® SA8775P | 0.994 ms | 0 - 27 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® QCS9075 | 0.963 ms | 0 - 2 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® QCM6690 | 2.799 ms | 0 - 140 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® QCS8450 (Proxy) | 0.991 ms | 0 - 39 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® SA7255P | 1.717 ms | 0 - 26 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Qualcomm® SA8295P | 1.335 ms | 0 - 23 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 0.375 ms | 0 - 24 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Snapdragon® 7 Gen 4 Mobile | 0.804 ms | 0 - 25 MB | NPU |
+ | MobileNet-v3-Small | QNN_DLC | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 0.313 ms | 0 - 28 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 0.557 ms | 0 - 46 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 2.144 ms | 0 - 31 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 0.84 ms | 0 - 2 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® SA8775P | 1.145 ms | 0 - 34 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® QCS9075 | 1.011 ms | 0 - 8 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 1.614 ms | 0 - 48 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® SA7255P | 2.144 ms | 0 - 31 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Qualcomm® SA8295P | 1.485 ms | 0 - 30 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 0.435 ms | 0 - 36 MB | NPU |
+ | MobileNet-v3-Small | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 0.331 ms | 0 - 36 MB | NPU |
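When choosing a runtime for a given chipset, the table can be reduced programmatically. A small sketch using a handful of float-precision rows transcribed from above (times in ms):

```python
# (runtime, chipset, inference_ms) rows transcribed from the table above
ROWS = [
    ("ONNX",    "Snapdragon 8 Elite Gen 5 Mobile", 0.346),
    ("QNN_DLC", "Snapdragon 8 Elite Gen 5 Mobile", 0.325),
    ("TFLITE",  "Snapdragon 8 Elite Gen 5 Mobile", 0.331),
    ("ONNX",    "Snapdragon X Elite",              0.676),
    ("QNN_DLC", "Snapdragon X Elite",              0.977),
]

def fastest_per_chipset(rows):
    """Return {chipset: (runtime, inference_ms)} keeping the lowest latency."""
    best = {}
    for runtime, chipset, ms in rows:
        if chipset not in best or ms < best[chipset][1]:
            best[chipset] = (runtime, ms)
    return best

best = fastest_per_chipset(ROWS)
```

Latency is only one axis; memory footprint and SDK constraints in the table above matter just as much when picking a runtime.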
 
 ## License
 * The license for the original implementation of MobileNet-v3-Small can be found
 [here](https://github.com/pytorch/vision/blob/main/LICENSE).
 
 ## References
 * [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
 
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
 
 
tool-versions.yaml DELETED
@@ -1,4 +0,0 @@
- tool_versions:
-   onnx:
-     qairt: 2.37.1.250807093845_124904
-     onnx_runtime: 1.23.0