Instructions to use Remade-AI/Zoom-Call with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Remade-AI/Zoom-Call with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("Remade-AI/Zoom-Call")

prompt = (
    "The video shows a [z00m_ca11] with four participants. In the top left box, "
    "a medieval knight in full armor adjusts his helmet. To his right, a pirate "
    "with a parrot on his shoulder drinks from a mug. In the bottom left, a "
    "scientist in a lab coat scribbles on a whiteboard. In the bottom right, an "
    "alien in a suit waves awkwardly."
)
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
Update README.md
README.md (changed):

```diff
@@ -105,7 +105,7 @@ widget:
   <ul style="margin-bottom: 0;">
     <li><b>Base Model:</b> Wan2.1 14B T2V</li>
     <li><b>Training Data:</b> Trained on 2 minutes of video comprised of 28 short clips (each clip captioned separately) of various Zoom call recordings.</li>
-    <li><b> Epochs:</b>
+    <li><b> Epochs:</b> 10</li>
   </ul>
 </div>
```