---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---
|
|
|
|
|
 |
|
|
|
|
|
# BEiT: Optimized for Qualcomm Devices
|
|
|
|
|
BEiT is a machine learning model that classifies images into the ImageNet classes. It can also serve as a backbone for building more complex models for specific use cases.
|
|
|
|
|
This model is based on the BEiT implementation found [here](https://github.com/microsoft/unilm/tree/master/beit).
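If you just want to smoke-test the network itself in PyTorch, independent of the Qualcomm assets in this repository, the `timm` package also distributes BEiT checkpoints. A minimal sketch, assuming timm's `beit_base_patch16_224` model identifier:

```python
import timm
import torch

# Load the 224x224 base BEiT variant with pretrained ImageNet weights.
model = timm.create_model("beit_base_patch16_224", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)    # dummy preprocessed RGB batch
with torch.no_grad():
    logits = model(x)              # (1, 1000) ImageNet class scores
print(int(logits.argmax(dim=-1)))  # predicted class index
```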
|
|
This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/beit) library to export the model with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
|
|
|
|
|
Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device. |
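For reference, below is a minimal sketch of that compile-and-profile flow using the `qai_hub` Python client. It assumes `pip install qai-hub` and a configured API token; the device name is only an example, and any device returned by `qai_hub.get_devices()` works:

```python
import qai_hub as hub

# Example hosted device; list the available ones with hub.get_devices().
device = hub.Device("Samsung Galaxy S24 (Family)")

# Compile a traced TorchScript (or ONNX) model for the target device.
compile_job = hub.submit_compile_job(
    model="beit.pt",                           # path to your traced model
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),  # matches the 224x224 input
)

# Profile the compiled model on a hosted physical device.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.download_profile())
```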
|
|
|
|
|
## Getting Started |
|
|
There are two ways to deploy this model on your device: |
|
|
|
|
|
### Option 1: Download Pre-Exported Models |
|
|
|
|
|
Below are pre-exported model assets ready for deployment. |
|
|
|
|
|
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.0/beit-onnx-float.zip) |
| QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.0/beit-qnn_dlc-float.zip) |
| QNN_DLC | w8a16 | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.0/beit-qnn_dlc-w8a16.zip) |
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.46.0/beit-tflite-float.zip) |
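Once extracted, the ONNX asset can be run directly with ONNX Runtime. A minimal sketch; the file path is an assumption about the zip's layout, so adjust it to the file you actually extracted:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical extracted path; point this at your model.onnx file.
session = ort.InferenceSession(
    "beit-onnx-float/model.onnx",
    providers=["CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

# Dummy preprocessed input; see the preprocessing sketch under Model Details.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
(logits,) = session.run(None, {input_name: x})
print(logits.argmax(axis=-1))  # predicted ImageNet class index
```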
|
|
|
|
|
For more device-specific assets and performance metrics, visit **[BEiT on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/beit)**.
|
|
|
|
|
|
|
|
### Option 2: Export with Custom Configurations |
|
|
|
|
|
Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/beit) Python library to compile and export the model with your own: |
|
|
- Custom weights (e.g., fine-tuned checkpoints) |
|
|
- Custom input shapes |
|
|
- Target device and runtime configurations |
|
|
|
|
|
This option is ideal if you need to customize the model beyond the default configuration provided here. |
|
|
|
|
|
See the [BEiT repository on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/beit) for usage instructions.
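As a quick orientation, the sketch below loads the PyTorch source model through the package's per-model `Model.from_pretrained()` entry point; the export itself is driven by the model's export script (`python -m qai_hub_models.models.beit.export`), whose exact flags you should take from the GitHub README rather than this sketch:

```python
import torch
from qai_hub_models.models.beit import Model

# Load the default ImageNet checkpoint; custom weights and shapes are
# passed through the export CLI instead.
model = Model.from_pretrained()
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy batch at the default resolution
with torch.no_grad():
    logits = model(x)
print(logits.shape)
```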
|
|
|
|
|
## Model Details |
|
|
|
|
|
**Model Type:** Image classification
|
|
|
|
|
**Model Stats:** |
|
|
- Model checkpoint: ImageNet
|
|
- Input resolution: 224x224 |
|
|
- Number of parameters: 92.0M |
|
|
- Model size (float): 351 MB |
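A minimal preprocessing sketch for the 224x224 input above. Note that the normalization constants are an assumption: the values below are the widely used ImageNet statistics, but some BEiT checkpoints instead expect mean = std = 0.5, so check the input spec of the asset you downloaded.

```python
import numpy as np
from PIL import Image

# Commonly used ImageNet normalization constants (an assumption; verify
# against the model's documented input spec).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 255.0  # HWC in [0, 1]
    arr = (arr - MEAN) / STD                         # channel-wise normalize
    return arr.transpose(2, 0, 1)[None]              # NCHW, batch of 1
```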
|
|
|
|
|
## Performance Summary |
|
|
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| BEiT | ONNX | float | Snapdragon® X Elite | 14.768 | 186 - 186 | NPU |
| BEiT | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 9.876 | 0 - 524 | NPU |
| BEiT | ONNX | float | Qualcomm® QCS8550 (Proxy) | 13.457 | 0 - 194 | NPU |
| BEiT | ONNX | float | Qualcomm® QCS9075 | 20.562 | 0 - 4 | NPU |
| BEiT | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 7.167 | 1 - 447 | NPU |
| BEiT | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 5.932 | 0 - 436 | NPU |
| BEiT | QNN_DLC | float | Snapdragon® X Elite | 13.534 | 1 - 1 | NPU |
| BEiT | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 8.535 | 0 - 535 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 44.873 | 1 - 485 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 12.732 | 1 - 2 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® SA8775P | 15.563 | 1 - 485 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® QCS9075 | 16.84 | 1 - 3 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 22.993 | 0 - 507 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® SA7255P | 44.873 | 1 - 485 | NPU |
| BEiT | QNN_DLC | float | Qualcomm® SA8295P | 19.001 | 1 - 468 | NPU |
| BEiT | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 7.003 | 1 - 478 | NPU |
| BEiT | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.475 | 1 - 481 | NPU |
| BEiT | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 6.665 | 0 - 350 | NPU |
| BEiT | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 38.644 | 0 - 302 | NPU |
| BEiT | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 9.671 | 0 - 3 | NPU |
| BEiT | TFLITE | float | Qualcomm® SA8775P | 12.131 | 0 - 310 | NPU |
| BEiT | TFLITE | float | Qualcomm® QCS9075 | 13.331 | 0 - 187 | NPU |
| BEiT | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 19.271 | 0 - 433 | NPU |
| BEiT | TFLITE | float | Qualcomm® SA7255P | 38.644 | 0 - 302 | NPU |
| BEiT | TFLITE | float | Qualcomm® SA8295P | 16.047 | 0 - 410 | NPU |
| BEiT | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.824 | 0 - 302 | NPU |
| BEiT | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 4.065 | 0 - 304 | NPU |
|
|
|
|
|
## License |
|
|
* The license for the original implementation of BEiT can be found [here](https://github.com/microsoft/unilm/blob/master/LICENSE).
|
|
|
|
|
## References |
|
|
* [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254)
|
|
* [Source Model Implementation](https://github.com/microsoft/unilm/tree/master/beit) |
|
|
|
|
|
## Community |
|
|
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
|
|
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
|
|
|