---
library_name: pytorch
license: other
tags:
  - backbone
  - android
pipeline_tag: image-classification
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/web-assets/model_demo.png)

# Beit: Optimized for Qualcomm Devices

Beit is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This is based on the implementation of Beit found [here](https://github.com/microsoft/unilm/tree/master/beit). This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/qai_hub_models/models/beit) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Getting Started

There are two ways to deploy this model on your device:

### Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment (a runnable inference sketch appears after the performance summary below).

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.49.1/beit-onnx-float.zip) |
| ONNX | w8a16 | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.49.1/beit-onnx-w8a16.zip) |
| QNN_DLC | float | Universal | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.49.1/beit-qnn_dlc-float.zip) |
| QNN_DLC | w8a16 | Universal | QAIRT 2.43 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.49.1/beit-qnn_dlc-w8a16.zip) |
| TFLITE | float | Universal | QAIRT 2.43, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/beit/releases/v0.49.1/beit-tflite-float.zip) |

For more device-specific assets and performance metrics, visit **[Beit on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/beit)**.

### Option 2: Export with Custom Configurations

Use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/qai_hub_models/models/beit) Python library to compile and export the model with your own:

- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here. See our repository for [Beit on GitHub](https://github.com/qualcomm/ai-hub-models/blob/main/qai_hub_models/models/beit) for usage instructions.
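As a starting point for Option 2, here is a minimal sketch. It assumes `qai-hub-models` is installed from pip and that the `Model.from_pretrained()` and module-level `export` conventions used across the ai-hub-models repository apply to Beit; see the linked repository for the authoritative instructions.

```python
# Minimal sketch of Option 2 (a sketch under stated assumptions, not the
# authoritative workflow): load the default checkpoint and sanity-check a
# forward pass before exporting with custom configurations.
import torch
from qai_hub_models.models.beit import Model  # assumes `pip install qai-hub-models`

model = Model.from_pretrained()  # default ImageNet checkpoint
model.eval()

# Forward pass at the 224x224 input resolution listed under Model Details.
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print(logits.shape)  # expected (1, 1000) for ImageNet-1k classes

# Compiling and profiling for a specific device goes through the model's
# export entry point; exact flags may vary by version (check with --help):
#   python -m qai_hub_models.models.beit.export --device "Samsung Galaxy S24"
```

Custom weights and input shapes are passed through the same export entry point; consult the repository's per-model documentation for the flags your installed version supports.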
## Model Details

**Model Type:** Image classification

**Model Stats:**

- Model checkpoint: ImageNet
- Input resolution: 224x224
- Number of parameters: 92.0M
- Model size (float): 351 MB

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| Beit | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.237 ms | 1 - 486 MB | NPU |
| Beit | ONNX | float | Snapdragon® X2 Elite | 6.019 ms | 185 - 185 MB | NPU |
| Beit | ONNX | float | Snapdragon® X Elite | 13.686 ms | 185 - 185 MB | NPU |
| Beit | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 9.267 ms | 0 - 527 MB | NPU |
| Beit | ONNX | float | Qualcomm® QCS8550 (Proxy) | 12.928 ms | 0 - 195 MB | NPU |
| Beit | ONNX | float | Qualcomm® QCS9075 | 17.598 ms | 0 - 4 MB | NPU |
| Beit | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.65 ms | 1 - 493 MB | NPU |
| Beit | ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 4.182 ms | 0 - 407 MB | NPU |
| Beit | ONNX | w8a16 | Snapdragon® X2 Elite | 4.371 ms | 96 - 96 MB | NPU |
| Beit | ONNX | w8a16 | Snapdragon® X Elite | 12.511 ms | 96 - 96 MB | NPU |
| Beit | ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 7.927 ms | 0 - 492 MB | NPU |
| Beit | ONNX | w8a16 | Qualcomm® QCS6490 | 1060.589 ms | 53 - 70 MB | CPU |
| Beit | ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | 11.769 ms | 0 - 6 MB | NPU |
| Beit | ONNX | w8a16 | Qualcomm® QCS9075 | 14.665 ms | 0 - 3 MB | NPU |
| Beit | ONNX | w8a16 | Qualcomm® QCM6690 | 599.901 ms | 112 - 128 MB | CPU |
| Beit | ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 6.102 ms | 0 - 408 MB | NPU |
| Beit | ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 582.769 ms | 73 - 87 MB | CPU |
| Beit | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.571 ms | 1 - 481 MB | NPU |
| Beit | QNN_DLC | float | Snapdragon® X2 Elite | 6.925 ms | 1 - 1 MB | NPU |
| Beit | QNN_DLC | float | Snapdragon® X Elite | 13.429 ms | 1 - 1 MB | NPU |
| Beit | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 8.695 ms | 0 - 535 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 44.932 ms | 1 - 485 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 12.588 ms | 1 - 3 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® SA8775P | 16.39 ms | 1 - 485 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® QCS9075 | 16.723 ms | 1 - 3 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 22.928 ms | 1 - 509 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® SA7255P | 44.932 ms | 1 - 485 MB | NPU |
| Beit | QNN_DLC | float | Qualcomm® SA8295P | 19.07 ms | 1 - 468 MB | NPU |
| Beit | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.965 ms | 1 - 480 MB | NPU |
| Beit | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.971 ms | 0 - 297 MB | NPU |
| Beit | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 6.668 ms | 0 - 343 MB | NPU |
| Beit | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 38.591 ms | 0 - 297 MB | NPU |
| Beit | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 9.334 ms | 0 - 3 MB | NPU |
| Beit | TFLITE | float | Qualcomm® SA8775P | 55.63 ms | 0 - 305 MB | NPU |
| Beit | TFLITE | float | Qualcomm® QCS9075 | 13.213 ms | 0 - 187 MB | NPU |
| Beit | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 19.184 ms | 0 - 430 MB | NPU |
| Beit | TFLITE | float | Qualcomm® SA7255P | 38.591 ms | 0 - 297 MB | NPU |
| Beit | TFLITE | float | Qualcomm® SA8295P | 16.061 ms | 0 - 405 MB | NPU |
| Beit | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.721 ms | 0 - 297 MB | NPU |
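To sanity-check a downloaded asset locally, here is a hedged sketch that runs the pre-exported float ONNX asset from Option 1 with ONNX Runtime. The extracted file name (`beit.onnx`), the example image path, the NCHW input layout, and the ImageNet normalization constants are assumptions, not details taken from this card:

```python
# Hedged sketch: classify one image with the downloaded float ONNX asset.
import numpy as np
import onnxruntime as ort
from PIL import Image

# Path to the model file after unzipping the Option 1 asset (name assumed).
session = ort.InferenceSession("beit.onnx")
input_name = session.get_inputs()[0].name

# Resize to the 224x224 input resolution listed under Model Stats and apply
# the usual ImageNet mean/std (assumed for this checkpoint).
image = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(image, dtype=np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = ((x - mean) / std).transpose(2, 0, 1)[np.newaxis]  # assumed NCHW, batch of 1

logits = session.run(None, {input_name: x})[0]
print("Top-1 ImageNet class index:", int(logits[0].argmax()))
```

Note that host-side ONNX Runtime only checks functional correctness; to reproduce the on-device latencies in the table above, profile through AI Hub Workbench on a hosted device.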
## License

* The license for the original implementation of Beit can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).

## References

* [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254)
* [Source Model Implementation](https://github.com/microsoft/unilm/tree/master/beit)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).