LeViT: Optimized for Mobile Deployment

ImageNet classifier and general-purpose backbone

LeViT is a vision transformer model that can classify images from the ImageNet dataset.

This model is an implementation of LeViT found here.

This repository provides scripts to run LeViT on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Image classification
  • Model Stats:
    • Model checkpoint: LeViT-128S
    • Input resolution: 224x224
    • Number of parameters: 7.82M
    • Model size (float): 29.9 MB
    • Model size (w8a16): 8.83 MB

| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| LeViT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 3.915 | 0 - 150 | NPU | LeViT.tflite |
| LeViT | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 3.743 | 1 - 147 | NPU | LeViT.dlc |
| LeViT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.187 | 0 - 177 | NPU | LeViT.tflite |
| LeViT | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.174 | 1 - 178 | NPU | LeViT.dlc |
| LeViT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.446 | 0 - 3 | NPU | LeViT.tflite |
| LeViT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.479 | 1 - 3 | NPU | LeViT.dlc |
| LeViT | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 1.498 | 0 - 22 | NPU | LeViT.onnx.zip |
| LeViT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.945 | 0 - 150 | NPU | LeViT.tflite |
| LeViT | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.931 | 0 - 148 | NPU | LeViT.dlc |
| LeViT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.987 | 0 - 184 | NPU | LeViT.tflite |
| LeViT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.032 | 1 - 178 | NPU | LeViT.dlc |
| LeViT | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.995 | 0 - 153 | NPU | LeViT.onnx.zip |
| LeViT | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 0.773 | 0 - 150 | NPU | LeViT.tflite |
| LeViT | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 0.817 | 0 - 152 | NPU | LeViT.dlc |
| LeViT | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 0.847 | 0 - 122 | NPU | LeViT.onnx.zip |
| LeViT | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | TFLITE | 0.678 | 0 - 155 | NPU | LeViT.tflite |
| LeViT | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | QNN_DLC | 0.741 | 1 - 151 | NPU | LeViT.dlc |
| LeViT | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | ONNX | 0.813 | 1 - 124 | NPU | LeViT.onnx.zip |
| LeViT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.711 | 1 - 1 | NPU | LeViT.dlc |
| LeViT | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.459 | 16 - 16 | NPU | LeViT.onnx.zip |
| LeViT | w8a16 | Dragonwing Q-6690 MTP | Qualcomm® QCM6690 | QNN_DLC | 5.802 | 0 - 141 | NPU | LeViT.dlc |
| LeViT | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.972 | 0 - 131 | NPU | LeViT.dlc |
| LeViT | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.419 | 0 - 3 | NPU | LeViT.dlc |
| LeViT | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 7.214 | 0 - 131 | NPU | LeViT.dlc |
| LeViT | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.996 | 0 - 160 | NPU | LeViT.dlc |
| LeViT | w8a16 | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 0.736 | 0 - 131 | NPU | LeViT.dlc |
| LeViT | w8a16 | Snapdragon 7 Gen 4 QRD | Snapdragon® 7 Gen 4 Mobile | QNN_DLC | 1.48 | 0 - 136 | NPU | LeViT.dlc |
| LeViT | w8a16 | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | QNN_DLC | 0.632 | 0 - 133 | NPU | LeViT.dlc |
| LeViT | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.63 | 0 - 0 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | Dragonwing Q-6690 MTP | Qualcomm® QCM6690 | QNN_DLC | 6.094 | 0 - 142 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 3.039 | 0 - 131 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.46 | 0 - 3 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.741 | 0 - 131 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.015 | 0 - 161 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 0.751 | 0 - 134 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | Snapdragon 7 Gen 4 QRD | Snapdragon® 7 Gen 4 Mobile | QNN_DLC | 1.51 | 0 - 135 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | QNN_DLC | 0.648 | 0 - 132 | NPU | LeViT.dlc |
| LeViT | w8a16_mixed_int16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.669 | 0 - 0 | NPU | LeViT.dlc |
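
The parameter count listed above can be reproduced directly from the pretrained checkpoint once the package is installed (see below). A quick sketch, relying only on the fact that the model class is a standard PyTorch module:

from qai_hub_models.models.levit import Model

# Load the pretrained LeViT-128S wrapper and count its parameters.
model = Model.from_pretrained()
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.2f}M parameters")  # expected ~7.82M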

Installation

Install the package via pip:

# NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
pip install "qai-hub-models[levit]"

Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub Workbench with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
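
Once configured, a quick way to confirm that the client can reach AI Hub is to list the cloud-hosted devices available to your account; a minimal sketch using the qai_hub Python API:

import qai_hub as hub

# Prints the cloud-hosted devices your API token can target.
# This raises an authentication error if the token is not configured.
for device in hub.get_devices():
    print(device.name)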

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.levit.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.

%run -m qai_hub_models.models.levit.demo

Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:

  • Runs a performance check on-device on a cloud-hosted device.
  • Downloads compiled assets that can be deployed on-device for Android.
  • Checks accuracy between PyTorch and on-device outputs.

python -m qai_hub_models.models.levit.export

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.levit import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S25")

# Trace model
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
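
To keep a local copy of the compiled asset for later deployment, the returned model object can be downloaded; a short sketch, assuming the download method of the qai_hub model object:

# Save the compiled asset locally (file extension depends on the target runtime).
target_model.download("LeViT_compiled")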

Step 2: Performance profiling on cloud-hosted device

After compiling the model in step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
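
Beyond the metrics shown on the job page, the raw profile can also be pulled down for offline inspection; a brief sketch, assuming the qai_hub job API's download_profile:

# Retrieve the raw profiling results as a Python dict for offline analysis.
profile_data = profile_job.download_profile()
print(profile_data.keys())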

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
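
For instance, a minimal PSNR check between the PyTorch reference output and the on-device output might look like this (a sketch assuming a single output tensor; it reuses torch_model and input_data from above):

import numpy as np

# Reference output from the PyTorch model on the same sample inputs.
torch_inputs = [torch.tensor(data[0]) for _, data in input_data.items()]
ref = torch_model(*torch_inputs).detach().numpy()

# download_output_data returns a dict mapping output name -> list of arrays.
dev = list(on_device_output.values())[0][0]

mse = np.mean((ref - dev) ** 2)
psnr = 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)
print(f"PSNR: {psnr:.2f} dB")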

Note: On-device profiling and inference require access to Qualcomm® AI Hub Workbench. Sign up for access.

Run demo on a cloud-hosted device

You can also run the demo on-device.

python -m qai_hub_models.models.levit.demo --eval-mode on-device

NOTE: If you are running in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.

%run -m qai_hub_models.models.levit.demo -- --eval-mode on-device

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application. A host-side sanity check of the exported asset is sketched below.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.
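
Before wiring the .tflite asset into an Android application, it can be sanity-checked on the host with TensorFlow's TFLite interpreter. An illustrative sketch, assuming the exported file is named LeViT.tflite:

import numpy as np
import tensorflow as tf

# Load the compiled TFLite asset produced by the export script.
interpreter = tf.lite.Interpreter(model_path="LeViT.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random input matching the model's declared input shape.
dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # expect a 1000-way ImageNet logit vector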

View on Qualcomm® AI Hub

Get more details on LeViT's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of LeViT can be found here.
