---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: image-classification
---
# OpenAI-Clip: Optimized for Mobile Deployment
Multi-modal foundational model for vision and language tasks like image/text similarity and for zero-shot image classification
Contrastive Language-Image Pre-Training (CLIP) uses a ViT-like transformer to extract visual features and a causal language model to extract text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
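As a quick illustration of zero-shot classification with CLIP, here is a minimal sketch using the reference `clip` package from the OpenAI repository; the install path, the `dog.jpg` file, and the label prompts are illustrative assumptions (the runnable demo for this model card is described below).

```python
# Minimal zero-shot classification sketch, assuming the reference `clip`
# package (pip install git+https://github.com/openai/CLIP.git) and a local
# image file named dog.jpg (both are illustrative assumptions).
import torch
import clip
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)  # same checkpoint as this card

image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)
labels = ["a photo of a dog", "a photo of a cat"]
text = clip.tokenize(labels).to(device)  # tokenized to context length 77

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))  # label -> similarity-based probability
```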
This model is an implementation of OpenAI-Clip found here.
This repository provides scripts to run OpenAI-Clip on Qualcomm® devices. More details on model performance across various devices can be found here.
## Model Details
- Model Type: Image classification
- Model Stats:
- Model checkpoint: ViT-B/16
- Image input resolution: 224x224
- Text context length: 77
- Number of parameters (CLIPTextEncoder): 76.0M
- Model size (CLIPTextEncoder): 290 MB
- Number of parameters (CLIPImageEncoder): 115M
- Model size (CLIPImageEncoder): 437 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| OpenAI-Clip | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 367.248 ms | 0 - 406 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 305.317 ms | 1 - 10 MB | NPU | Use Export Script |
| OpenAI-Clip | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 26.29 ms | 0 - 364 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 22.919 ms | 1 - 455 MB | NPU | Use Export Script |
| OpenAI-Clip | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 23.813 ms | 0 - 48 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 19.969 ms | 1 - 3 MB | NPU | Use Export Script |
| OpenAI-Clip | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 28.479 ms | 0 - 407 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 22.717 ms | 1 - 10 MB | NPU | Use Export Script |
| OpenAI-Clip | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 367.248 ms | 0 - 406 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 305.317 ms | 1 - 10 MB | NPU | Use Export Script |
| OpenAI-Clip | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 23.865 ms | 0 - 43 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 19.894 ms | 1 - 3 MB | NPU | Use Export Script |
| OpenAI-Clip | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 29.633 ms | 0 - 349 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 23.848 ms | 1 - 15 MB | NPU | Use Export Script |
| OpenAI-Clip | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 23.649 ms | 0 - 63 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 19.842 ms | 1 - 8 MB | NPU | Use Export Script |
| OpenAI-Clip | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 28.479 ms | 0 - 407 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 22.717 ms | 1 - 10 MB | NPU | Use Export Script |
| OpenAI-Clip | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 23.99 ms | 0 - 53 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 19.815 ms | 0 - 47 MB | NPU | Use Export Script |
| OpenAI-Clip | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 24.691 ms | 0 - 54 MB | NPU | OpenAI-Clip.onnx |
| OpenAI-Clip | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 16.648 ms | 0 - 414 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 14.178 ms | 1 - 491 MB | NPU | Use Export Script |
| OpenAI-Clip | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 17.654 ms | 0 - 532 MB | NPU | OpenAI-Clip.onnx |
| OpenAI-Clip | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 15.935 ms | 0 - 406 MB | NPU | OpenAI-Clip.tflite |
| OpenAI-Clip | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 13.223 ms | 1 - 470 MB | NPU | Use Export Script |
| OpenAI-Clip | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 16.872 ms | 1 - 512 MB | NPU | OpenAI-Clip.onnx |
| OpenAI-Clip | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 20.798 ms | 1 - 1 MB | NPU | Use Export Script |
| OpenAI-Clip | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 25.8 ms | 293 - 293 MB | NPU | OpenAI-Clip.onnx |
## Installation
Install the package via pip:

```bash
pip install "qai-hub-models[openai-clip]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.
With this API token, you can configure your client to run models on the cloud hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to docs for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.openai_clip.demo
```
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.
NOTE: To run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the above command.

```python
%run -m qai_hub_models.models.openai_clip.demo
```
## Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Runs a performance check on-device on a cloud-hosted device.
- Downloads compiled assets that can be deployed on-device for Android.
- Runs an accuracy check between the PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.openai_clip.export
```

```text
Profiling Results
------------------------------------------------------------
OpenAI-Clip
Device                          : cs_8275 (ANDROID 14)
Runtime                         : TFLITE
Estimated inference time (ms)   : 367.2
Estimated peak memory usage (MB): [0, 406]
Total # Ops                     : 1320
Compute Unit(s)                 : npu (1318 ops) gpu (0 ops) cpu (2 ops)
```
## How does this work?
This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
### Step 1: Compile model for on-device deployment
To compile a PyTorch model for on-device deployment, we first trace the model in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch

import qai_hub as hub
from qai_hub_models.models.openai_clip import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
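Once the compile job finishes, the compiled asset can also be saved locally. A short sketch, assuming the standard `qai_hub` job API; the filename is illustrative:

```python
# Wait for compilation to complete, then save the compiled asset locally
# (the filename is an illustrative choice).
compile_job.wait()
target_model.download("openai_clip_compiled.tflite")
```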
### Step 2: Performance profiling on cloud-hosted device
After compiling the model in Step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
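The same metrics shown on the job page can also be pulled down programmatically. A sketch, assuming the `qai_hub` profile-download API; the exact dictionary keys are an assumption, so inspect the returned dict for the full schema:

```python
# Download the profile as a Python dict once the job completes.
# The key name below is an assumption; print the full dict to see the schema.
profile = profile_job.download_profile()
print(profile.get("execution_summary", profile))
```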
### Step 3: Verify on-device accuracy
To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
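For example, here is a hedged sketch of such a comparison; it assumes the model's first output tensor and that `on_device_output` maps each output name to a list of numpy arrays:

```python
import numpy as np

# Reference output from the local PyTorch model on the same sample inputs.
torch_inputs = [torch.tensor(data[0]) for data in input_data.values()]
ref = torch_model(*torch_inputs)
ref = (ref[0] if isinstance(ref, (tuple, list)) else ref).detach().numpy()

# First on-device output (assumes a dict of output name -> list of arrays).
dev = next(iter(on_device_output.values()))[0]

# PSNR between the reference and on-device outputs.
mse = np.mean((ref - dev) ** 2)
psnr = 10 * np.log10(np.max(np.abs(ref)) ** 2 / (mse + 1e-12))
print(f"PSNR: {psnr:.2f} dB")
```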
Note: On-device profiling and inference require access to Qualcomm® AI Hub. Sign up for access.
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): This tutorial provides a guide to deploy the `.tflite` model in an Android application.
- QNN (`.so` export): This sample app provides instructions on how to use the `.so` shared library in an Android application.
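Before wiring the asset into an app, a quick local sanity check of the exported `.tflite` file can catch shape or dtype mismatches early. A sketch, assuming the TensorFlow Lite Python interpreter; the filename is illustrative:

```python
import numpy as np
import tensorflow as tf

# Load the exported asset (filename is illustrative) and run random inputs
# through it to confirm the input/output signatures look sane.
interpreter = tf.lite.Interpreter(model_path="OpenAI-Clip.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    dummy = np.random.rand(*detail["shape"]).astype(detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
for detail in interpreter.get_output_details():
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
```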
## View on Qualcomm® AI Hub
Get more details on OpenAI-Clip's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
## License
- The license for the original implementation of OpenAI-Clip can be found here.
- The license for the compiled assets for on-device deployment can be found here.
## Community
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
