Instructions for using GD-ML/Omni-Effects with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use GD-ML/Omni-Effects with Diffusers (note: this is the generic auto-generated text-to-image snippet; Omni-Effects is tagged image-to-video, so see the project repository for effect-specific usage):

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "GD-ML/Omni-Effects",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
Add pipeline tag, library name, and link to code
#1 opened by nielsr (HF Staff)
README.md CHANGED

```diff
@@ -1,13 +1,16 @@
 ---
 license: mit
+pipeline_tag: image-to-video
+library_name: diffusers
 ---
 
 # *Omni-Effects*: Unified and Spatially-Controllable Visual Effects Generation
 
 [](https://arxiv.org/abs/2508.07981)
 [](https://amap-ml.github.io/Omni-Effects.github.io/)
 [](https://huggingface.co/datasets/GD-ML/Omni-VFX)
 [](https://huggingface.co/GD-ML/Omni-Effects)
+[](https://github.com/AMAP-ML/Omni-Effects)
 
 # 🔥 Updates
```
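The `pipeline_tag` and `library_name` fields added by this PR live in the README's YAML front matter, the block between the two `---` delimiters, which Hub tooling parses to categorize the model and pick the right code snippet. A minimal stdlib-only sketch of such a parser (illustrative only, not the Hub's actual implementation; it handles flat `key: value` pairs, nothing more):

```python
def parse_front_matter(readme: str) -> dict:
    """Extract flat key: value pairs from a README's YAML front matter."""
    lines = readme.splitlines()
    # Front matter must start on the very first line with "---".
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


readme = """---
license: mit
pipeline_tag: image-to-video
library_name: diffusers
---

# *Omni-Effects*
"""
print(parse_front_matter(readme))
# {'license': 'mit', 'pipeline_tag': 'image-to-video', 'library_name': 'diffusers'}
```

With `library_name: diffusers` present, the Hub knows to render the Diffusers code snippet shown above, and `pipeline_tag: image-to-video` makes the model discoverable under that task filter.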