v0.46.1
See https://github.com/quic/ai-hub-models/releases/v0.46.1 for the changelog.
- Beit_float.dlc +0 -3
- Beit_float.onnx.zip +0 -3
- Beit_float.tflite +0 -3
- README.md +74 -227
- tool-versions.yaml +0 -4
Beit_float.dlc
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a4822b2907107927eaa6717e2f33d7bfeda1acda7ad5c87984deb9139506238d
-size 368438420
Beit_float.onnx.zip
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c36a0ea5d8571bc808b755c899b78921ddfd246ac41528f257ca3a5b4adb80ea
-size 218717170
Beit_float.tflite
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6cb77fa2b690e85ac5194efff636d0b8eb2c7530049054b6ecc8cd02295769e2
-size 368085332
README.md
CHANGED
@@ -10,245 +10,92 @@ pipeline_tag: image-classification

-# Beit: Optimized for
-## Imagenet classifier and general purpose backbone
-
Beit is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.

-This
-
-## Demo off target
-
-The package contains a simple end-to-end demo that downloads pre-trained
-weights and runs this model on a sample input.
-
-```bash
-python -m qai_hub_models.models.beit.demo
-```
-
-The above demo runs a reference implementation of pre-processing, model
-inference, and post-processing.
-
-**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
-environment, please add the following to your cell (instead of the above).
-```
-%run -m qai_hub_models.models.beit.demo
-```
-
-### Run model on a cloud-hosted device
-
-In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
-device. This script does the following:
-* Performance check on-device on a cloud-hosted device.
-* Downloads compiled assets that can be deployed on-device for Android.
-* Accuracy check between PyTorch and on-device outputs.
-
-```bash
-python -m qai_hub_models.models.beit.export
-```
-
-## How does this work?
-
-This [export script](https://aihub.qualcomm.com/models/beit/qai_hub_models/models/Beit/export.py)
-leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
-on-device. Let's go through each step below in detail:
-
-Step 1: **Compile model for on-device deployment**
-
-To compile a PyTorch model for on-device deployment, we first trace the model
-in memory using `jit.trace` and then call the `submit_compile_job` API.
-
-```python
-import torch
-
-import qai_hub as hub
-from qai_hub_models.models.beit import Model
-
-# Load the model
-torch_model = Model.from_pretrained()
-
-# Device
-device = hub.Device("Samsung Galaxy S25")
-
-# Trace model
-input_shape = torch_model.get_input_spec()
-sample_inputs = torch_model.sample_inputs()
-
-pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
-# Compile model on a specific device
-compile_job = hub.submit_compile_job(
-    model=pt_model,
-    device=device,
-    input_specs=torch_model.get_input_spec(),
-)
-
-# Get target model to run on-device
-target_model = compile_job.get_target_model()
-```
-
-Step 2: **Performance profiling on cloud-hosted device**
-
-After compiling the model in step 1, it can be profiled on-device using the
-`target_model`. Note that this script runs the model on a device automatically
-provisioned in the cloud. Once the job is submitted, you can navigate to the
-provided job URL to view a variety of on-device performance metrics.
-
-```python
-profile_job = hub.submit_profile_job(
-    model=target_model,
-    device=device,
-)
-```
-
-Step 3: **Verify on-device accuracy**
-
-To verify the accuracy of the model on-device, you can run on-device inference
-on sample input data on the same cloud-hosted device.
-
-```python
-input_data = torch_model.sample_inputs()
-inference_job = hub.submit_inference_job(
-    model=target_model,
-    device=device,
-    inputs=input_data,
-)
-on_device_output = inference_job.download_output_data()
-```
-
-With the output of the model, you can compute metrics like PSNR and relative
-error, or spot-check the output against the expected output.
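
For reference, a minimal sketch of such a spot check (hypothetical variable and key names; `on_device_output` comes from the inference job above, and `torch_output` is assumed to be the PyTorch model's output on the same inputs):

```python
import numpy as np

# "output_0" is a hypothetical key; inspect on_device_output for the real one.
device_out = np.asarray(on_device_output["output_0"][0])
reference = torch_output.detach().numpy()

# PSNR between the on-device output and the PyTorch reference.
mse = np.mean((device_out - reference) ** 2)
psnr = 10 * np.log10(np.max(np.abs(reference)) ** 2 / (mse + 1e-12))
print(f"PSNR vs PyTorch reference: {psnr:.2f} dB")
```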
-
-**Note**: This on-device profiling and inference requires access to Qualcomm®
-AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
-## Run demo on a cloud-hosted device
-
-You can also run the demo on-device.
-
-```bash
-python -m qai_hub_models.models.beit.demo --eval-mode on-device
-```
-
-**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
-environment, please add the following to your cell (instead of the above).
-```
-%run -m qai_hub_models.models.beit.demo -- --eval-mode on-device
-```
-
-## Deploying compiled model to Android
-
-The models can be deployed using multiple runtimes:
-- TensorFlow Lite (`.tflite` export): [This
-tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
-guide to deploy the .tflite model in an Android application.
-
-- QNN (`.so` export): This [sample
-app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
-provides instructions on how to use the `.so` shared library in an Android application.
-
-## View on Qualcomm® AI Hub
-Get more details on Beit's performance across various devices [here](https://aihub.qualcomm.com/models/beit).
-Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

+# Beit: Optimized for Qualcomm Devices

+This is based on the implementation of Beit found [here](https://github.com/microsoft/unilm/tree/master/beit).
+This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/beit) library to export the model with custom configurations. More details on model performance across various devices can be found in the [Performance Summary](#performance-summary).
+
+Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
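
As a quick sanity check after signing up, a minimal sketch using the `qai_hub` Python client (assumes the package is installed and an API token has been configured with `qai-hub configure --api_token <TOKEN>`):

```python
import qai_hub as hub

# List the cloud-hosted devices your account can target.
for device in hub.get_devices():
    print(device.name)
```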
+
+## Getting Started
+
+There are two ways to deploy this model on your device:
+
+### Option 1: Download Pre-Exported Models
+
+Below are pre-exported model assets ready for deployment.
+
+| Runtime | Precision | Chipset | SDK Versions | Download |
+|---|---|---|---|---|
+| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.1/beit-onnx-float.zip) |
+| QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.1/beit-qnn_dlc-float.zip) |
+| QNN_DLC | w8a16 | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.1/beit-qnn_dlc-w8a16.zip) |
+| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.1/beit-tflite-float.zip) |
+
+For more device-specific assets and performance metrics, visit **[Beit on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/beit)**.
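
As an illustration, a minimal sketch for running the downloaded TFLITE asset with the standard TensorFlow Lite interpreter (the `.tflite` file name inside the archive is an assumption):

```python
import numpy as np
import tensorflow as tf

# Hypothetical path; use the actual .tflite file extracted from the archive.
interpreter = tf.lite.Interpreter(model_path="Beit_float.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Beit expects a 224x224 input; a random tensor stands in for a preprocessed image.
dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # Imagenet classifier: expect one logit per class.
```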
+
+### Option 2: Export with Custom Configurations
+
+Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/beit) Python library to compile and export the model with your own:
+- Custom weights (e.g., fine-tuned checkpoints)
+- Custom input shapes
+- Target device and runtime configurations
+
+This option is ideal if you need to customize the model beyond the default configuration provided here. See the [Beit repository on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/beit) for usage instructions, and the sketch below for the module entry point.
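
A minimal sketch of invoking the library's export entry point (the bare command comes from the package documentation; specific flags vary by version, so consult `--help`):

```bash
# Show supported export options (target device, runtime, precision, ...)
python -m qai_hub_models.models.beit.export --help

# Run the default export for this model
python -m qai_hub_models.models.beit.export
```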
+
+## Model Details
+
+**Model Type:** Image classification
+
+**Model Stats:**
+- Model checkpoint: Imagenet
+- Input resolution: 224x224
+- Number of parameters: 92.0M
+- Model size (float): 351 MB
+
+## Performance Summary
+
+| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
+|---|---|---|---|---|---|---|
+| Beit | ONNX | float | Snapdragon® X Elite | 14.768 | 186 - 186 | NPU |
+| Beit | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 9.876 | 0 - 524 | NPU |
+| Beit | ONNX | float | Qualcomm® QCS8550 (Proxy) | 13.457 | 0 - 194 | NPU |
+| Beit | ONNX | float | Qualcomm® QCS9075 | 20.562 | 0 - 4 | NPU |
+| Beit | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 7.167 | 1 - 447 | NPU |
+| Beit | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 5.932 | 0 - 436 | NPU |
+| Beit | QNN_DLC | float | Snapdragon® X Elite | 13.534 | 1 - 1 | NPU |
+| Beit | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 8.535 | 0 - 535 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 44.873 | 1 - 485 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 12.732 | 1 - 2 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® SA8775P | 15.563 | 1 - 485 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® QCS9075 | 16.84 | 1 - 3 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 22.993 | 0 - 507 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® SA7255P | 44.873 | 1 - 485 | NPU |
+| Beit | QNN_DLC | float | Qualcomm® SA8295P | 19.001 | 1 - 468 | NPU |
+| Beit | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 7.003 | 1 - 478 | NPU |
+| Beit | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.475 | 1 - 481 | NPU |
+| Beit | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 6.665 | 0 - 350 | NPU |
+| Beit | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 38.644 | 0 - 302 | NPU |
+| Beit | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 9.671 | 0 - 3 | NPU |
+| Beit | TFLITE | float | Qualcomm® SA8775P | 12.131 | 0 - 310 | NPU |
+| Beit | TFLITE | float | Qualcomm® QCS9075 | 13.331 | 0 - 187 | NPU |
+| Beit | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 19.271 | 0 - 433 | NPU |
+| Beit | TFLITE | float | Qualcomm® SA7255P | 38.644 | 0 - 302 | NPU |
+| Beit | TFLITE | float | Qualcomm® SA8295P | 16.047 | 0 - 410 | NPU |
+| Beit | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.824 | 0 - 302 | NPU |
+| Beit | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 4.065 | 0 - 304 | NPU |

## License
* The license for the original implementation of Beit can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).

## References
* [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254)
* [Source Model Implementation](https://github.com/microsoft/unilm/tree/master/beit)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
tool-versions.yaml
DELETED
@@ -1,4 +0,0 @@
-tool_versions:
-  onnx:
-    qairt: 2.37.1.250807093845_124904
-    onnx_runtime: 1.23.0