Duplicate from qualcomm/First-Order-Motion-Model

Co-authored-by: QAIHM Bot <qaihm-bot@users.noreply.huggingface.co>
- .gitattributes +36 -0
- LICENSE +1 -0
- README.md +82 -0
.gitattributes
ADDED
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
DEPLOYMENT_MODEL_LICENSE.pdf filter=lfs diff=lfs merge=lfs -text

LICENSE
ADDED
@@ -0,0 +1 @@
The license of the original trained model can be found at https://github.com/AliaksandrSiarohin/first-order-model/blob/master/LICENSE.md.

README.md
ADDED
@@ -0,0 +1,82 @@
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-video

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fomm/web-assets/model_demo.png)

# First-Order-Motion-Model: Optimized for Qualcomm Devices

FOMM is a machine learning model that animates a still image to mirror the movements from a target video.

This is based on the implementation of First-Order-Motion-Model found [here](https://github.com/AliaksandrSiarohin/first-order-model/tree/master).
This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/tree/v0.49.1/qai_hub_models/models/fomm) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Getting Started

There are two ways to deploy this model on your device:

### Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fomm/releases/v0.49.1/fomm-onnx-float.zip) |

For more device-specific assets and performance metrics, visit **[First-Order-Motion-Model on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/fomm)**.
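Once unzipped, the ONNX assets can be driven with ONNX Runtime. The snippet below is a minimal preprocessing sketch only, assuming an NCHW float32 input in [0, 1] at the model's 256x256 resolution; the file names and tensor layout are assumptions, so verify them against the actual inputs of the downloaded models.

```python
# Sketch: preparing a 256x256 RGB frame for the pre-exported ONNX models.
# The NCHW float32 [0, 1] layout and the file name below are assumptions;
# inspect the downloaded models' input specs before relying on them.
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """256x256 HWC uint8 frame -> (1, 3, 256, 256) float32 batch in [0, 1]."""
    x = frame.astype(np.float32) / 255.0  # [0, 255] -> [0, 1]
    x = np.transpose(x, (2, 0, 1))        # HWC -> CHW
    return x[np.newaxis, ...]             # add batch dim

dummy = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a real frame
batch = preprocess(dummy)
print(batch.shape)  # -> (1, 3, 256, 256)

# With the unzipped assets, inference would look roughly like (untested sketch):
# import onnxruntime as ort
# detector = ort.InferenceSession("fomm-onnx-float/FOMMDetector.onnx")
# keypoints = detector.run(None, {detector.get_inputs()[0].name: batch})
```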

### Option 2: Export with Custom Configurations

Use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/tree/v0.49.1/qai_hub_models/models/fomm) Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See our repository for [First-Order-Motion-Model on GitHub](https://github.com/qualcomm/ai-hub-models/tree/v0.49.1/qai_hub_models/models/fomm) for usage instructions.

## Model Details

**Model Type:** Video generation

**Model Stats:**
- Model checkpoint: vox-256
- Input resolution: 256x256
- Model size (FOMMDetector) (float): 54.2 MB
- Model size (FOMMGenerator) (float): 174 MB

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| FOMMDetector | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.75 ms | 0 - 24 MB | NPU |
| FOMMDetector | ONNX | float | Snapdragon® X2 Elite | 2.777 ms | 28 - 28 MB | NPU |
| FOMMDetector | ONNX | float | Snapdragon® X Elite | 4.579 ms | 27 - 27 MB | NPU |
| FOMMDetector | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 3.275 ms | 0 - 32 MB | NPU |
| FOMMDetector | ONNX | float | Qualcomm® QCS9075 | 5.8 ms | 1 - 4 MB | NPU |
| FOMMDetector | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.925 ms | 0 - 21 MB | NPU |
| FOMMGenerator | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.76 ms | 0 - 196 MB | NPU |
| FOMMGenerator | ONNX | float | Snapdragon® X2 Elite | 12.262 ms | 91 - 91 MB | NPU |
| FOMMGenerator | ONNX | float | Snapdragon® X Elite | 29.148 ms | 89 - 89 MB | NPU |
| FOMMGenerator | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 17.31 ms | 3 - 222 MB | NPU |
| FOMMGenerator | ONNX | float | Qualcomm® QCS9075 | 34.708 ms | 18 - 22 MB | NPU |
| FOMMGenerator | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 13.265 ms | 8 - 196 MB | NPU |
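As a rough worked example (our assumption, not an official metric): if the detector and generator each run once per generated frame, the Snapdragon® 8 Elite Gen 5 numbers above imply a per-frame budget of about 13.5 ms, i.e. roughly 74 frames per second before any pre/post-processing:

```python
# Back-of-the-envelope throughput from the table above (Snapdragon 8 Elite Gen 5).
# Assumes one FOMMDetector + one FOMMGenerator pass per output frame; real
# pipelines add pre/post-processing, so treat this as an upper bound.
detector_ms = 2.75
generator_ms = 10.76

per_frame_ms = detector_ms + generator_ms
fps = 1000.0 / per_frame_ms

print(f"{per_frame_ms:.2f} ms/frame -> {fps:.1f} FPS")  # -> 13.51 ms/frame -> 74.0 FPS
```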

## License
* The license for the original implementation of First-Order-Motion-Model can be found [here](https://github.com/AliaksandrSiarohin/first-order-model/blob/master/LICENSE.md).

## References
* [First Order Motion Model for Image Animation](https://arxiv.org/abs/2003.00196)
* [Source Model Implementation](https://github.com/AliaksandrSiarohin/first-order-model/tree/master)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).