```
torchrun --nproc_per_node=$NUM_GPUS generate.py --size 704*1280 --dit_fsdp --t5_
```

Tips:

If you want to use the base model, you can use `--use_base_model --num_inference_steps 50`. Otherwise, if you want to generate interactive videos with your own input actions, you can use `--interactive`.
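As a sketch, an interactive multi-GPU run combining these flags might look like the following. The GPU count and size are illustrative placeholders, and the exact flag combination is an assumption rather than a documented recipe:

```shell
# Illustrative only: interactive generation driven by your own input actions.
# NUM_GPUS and --size are placeholders; adjust to your hardware and model.
NUM_GPUS=8
torchrun --nproc_per_node=$NUM_GPUS generate.py --size 704*1280 --interactive
```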

With multiple GPUs, you can pass `--use_async_vae --async_vae_warmup_iters 1` to speed up inference.

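Similarly, a base-model run with the async VAE enabled could be sketched as follows. Combining these particular flags in one invocation is an assumption; consult the script's help output for the supported options:

```shell
# Illustrative: base model with 50 denoising steps, async VAE warmup of 1 iter.
torchrun --nproc_per_node=8 generate.py --size 704*1280 \
    --use_base_model --num_inference_steps 50 \
    --use_async_vae --async_vae_warmup_iters 1
```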
## ⭐ Acknowledgements

- [Diffusers](https://github.com/huggingface/diffusers) for their excellent diffusion model framework
- [LightX2V](https://github.com/ModelTC/lightx2v) for their excellent quantization framework
- [Wan2.2](https://github.com/Wan-Video/Wan2.2) for their strong base model
- [lingbot-world](https://github.com/Robbyant/lingbot-world) for their context parallel framework

## 📜 License
This project is licensed under the Apache License, Version 2.0 — see [LICENSE.txt](LICENSE.txt).
## 📖 Citation
If you find this work useful for your research, please cite our paper: