Instructions to use Remade-AI/Rotate with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Remade-AI/Rotate with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the base Wan2.1 I2V pipeline; switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("Remade-AI/Rotate")

prompt = "The video shows a man seated on a chair. The man and the chair performs a r0t4tion 360 degrees rotation."
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png")
output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
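The example prompt in the Diffusers snippet above contains the token `r0t4tion`, which appears to act as the LoRA's trigger phrase. As a minimal sketch, a hypothetical helper (`build_rotation_prompt` is not part of any library here) could keep that trigger phrase consistent across prompts:

```python
# Hypothetical helper (assumption): builds a prompt containing the "r0t4tion"
# trigger phrase seen in the model card's example prompt.
def build_rotation_prompt(description: str, subject: str) -> str:
    # description: what the scene shows; subject: what should rotate
    return (
        f"The video shows {description}. "
        f"The {subject} performs a r0t4tion 360 degrees rotation."
    )

# Reproduces the card's example prompt structure:
print(build_rotation_prompt("a man seated on a chair", "man and the chair"))
```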
README.md CHANGED

```diff
@@ -107,7 +107,7 @@ widget:
 <ul style="margin-bottom: 0;">
 <li><b>Base Model:</b> Wan2.1 14B I2V 480p</li>
 <li><b>Training Data:</b> Trained on 30 seconds of video comprised of 12 short clips (each clip captioned separately) of things being rotated</li>
-<li><b> Epochs:</b>20</li>
+<li><b> Epochs:</b> 20</li>
 </ul>
 </div>
```