---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-video

---

# First-Order-Motion-Model: Optimized for Mobile Deployment
## Animation of Still Image from Source Video

FOMM is a machine learning model that animates a still image to mirror the movements in a source video.

This model is an implementation of First-Order-Motion-Model found [here](https://github.com/AliaksandrSiarohin/first-order-model/tree/master).

This repository provides scripts to run First-Order-Motion-Model on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/fomm).

### Model Details

- **Model Type:** Video generation
- **Model Stats:**
  - Model checkpoint: vox-256
  - Input resolution: 256x256
  - Model size (FOMMDetector) (float): 54.2 MB
  - Model size (FOMMGenerator) (float): 174 MB

| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| FOMMDetector | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 4.629 | 0 - 29 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMDetector | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 3.427 | 0 - 105 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMDetector | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 3.031 | 0 - 93 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMDetector | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 2.908 | 1 - 92 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMDetector | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 4.684 | 28 - 28 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMGenerator | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 22.855 | 18 - 21 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMGenerator | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 16.644 | 17 - 191 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMGenerator | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 13.498 | 16 - 154 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMGenerator | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 11.017 | 18 - 165 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |
| FOMMGenerator | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 22.939 | 88 - 88 | NPU | [First-Order-Motion-Model.onnx.zip](https://huggingface.co/qualcomm/First-Order-Motion-Model/blob/main/First-Order-Motion-Model.onnx.zip) |


## Installation

Install the package via pip:
```bash
# NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
pip install "qai-hub-models[fomm]"
```
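
To sanity-check the installation, you can import the FOMM model class that the rest of this card uses. A minimal sketch; it only assumes the `qai_hub_models.models.fomm` module path shown in the demo and export commands below:

```python
# Minimal install check: the FOMM model module should import without error.
from qai_hub_models.models.fomm import Model  # noqa: F401

print("qai-hub-models[fomm] is installed and the FOMM module imports cleanly.")
```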


## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on cloud-hosted
devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to the [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.


## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.fomm.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.fomm.demo
```


### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Checks the model's performance on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between the PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.fomm.export
```



## How does this work?

This [export script](https://aihub.qualcomm.com/models/fomm/qai_hub_models/models/First-Order-Motion-Model/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
(FOMM ships as two components, FOMMDetector and FOMMGenerator, as shown in the
table above; the snippet below illustrates the flow for a single traced model.)

```python
import torch

import qai_hub as hub
from qai_hub_models.models.fomm import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S25")

# Trace model
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
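
If you want to see exactly what the compile job expects, you can print the input spec directly. A minimal sketch, assuming the spec follows the `qai_hub` convention of mapping each input name to a `(shape, dtype)` pair:

```python
# Inspect the input specification passed to submit_compile_job.
# Assumes the qai_hub convention: {input_name: (shape_tuple, dtype_string)}.
for name, (shape, dtype) in torch_model.get_input_spec().items():
    print(f"{name}: shape={shape}, dtype={dtype}")
```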

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

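Besides the job URL, you can also pull the profiling results into Python once the job finishes. A minimal sketch, assuming the `qai_hub` client's `ProfileJob.download_profile()` returns the profile as a dict; the key names below are assumptions and may vary by client version:

```python
# Wait for the profile job to finish and download the raw metrics as a dict.
profile_data = profile_job.download_profile()

# The keys below are assumptions about the profile layout; inspect
# profile_data.keys() to see what your client version actually returns.
summary = profile_data.get("execution_summary", {})
print("Estimated inference time (us):", summary.get("estimated_inference_time"))
```
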
Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative
error, or spot-check the output against the expected output.

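For example, a simple PSNR check between the PyTorch reference and the on-device output might look like the sketch below. It assumes the `qai_hub` convention that `sample_inputs()` and `download_output_data()` both return dicts mapping tensor names to lists of numpy arrays, and that the PyTorch model returns its outputs in the same order:

```python
import numpy as np
import torch

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak**2 / mse))

# Reference outputs from the PyTorch model on the same sample inputs.
torch_inputs = [torch.from_numpy(arrays[0]) for arrays in input_data.values()]
with torch.no_grad():
    torch_outputs = torch_model(*torch_inputs)
if isinstance(torch_outputs, torch.Tensor):
    torch_outputs = [torch_outputs]

# Compare each on-device output against its PyTorch counterpart.
for ref, (name, arrays) in zip(torch_outputs, on_device_output.items()):
    print(f"{name}: PSNR = {psnr(ref.numpy(), arrays[0]):.2f} dB")
```
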
**Note**: On-device profiling and inference require access to Qualcomm®
AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).



## Deploying compiled model to Android

The models can be deployed using multiple runtimes; a sketch for fetching the
compiled asset programmatically follows the list:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploy the .tflite model in an Android application.

- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.

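Whichever runtime you target, the compiled asset can be fetched programmatically for packaging into your app, either from the export script's output directory or from the compile job in Step 1. A minimal sketch, assuming `qai_hub`'s `Model.download()`; the destination path is illustrative:

```python
# Download the compiled model produced by the compile job in Step 1.
# The destination path is illustrative; point it at your app's asset folder.
target_model.download("build/First-Order-Motion-Model-compiled")
print("Compiled model saved for packaging into the Android app.")
```
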
## View on Qualcomm® AI Hub
Get more details on First-Order-Motion-Model's performance across various devices [here](https://aihub.qualcomm.com/models/fomm).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).


## License
* The license for the original implementation of First-Order-Motion-Model can be found
  [here](https://github.com/AliaksandrSiarohin/first-order-model/blob/master/LICENSE.md).


## References
* [First Order Motion Model for Image Animation](https://arxiv.org/abs/2003.00196)
* [Source Model Implementation](https://github.com/AliaksandrSiarohin/first-order-model/tree/master)


## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).