Update README.md
README.md (changed)
@@ -33,7 +33,8 @@ This model is jointly finetuned with [DMD](https://arxiv.org/pdf/2405.14867) and
 - 3-step inference is supported and achieves up to a **50x speedup** for the denoising loop on a single **H100** GPU.
 - Our model is trained at **61×448×832** resolution, but it supports generating videos at any resolution (quality may degrade).
 - Finetuning and inference scripts are available in the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository:
- - [
+ - [1 Node/GPU debugging finetuning script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/distill/v1_distill_dmd_wan_VSA.sh)
+ - [Slurm training example script](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/distill/Wan-Syn-480P/distill_dmd_VSA_t2v_14B_480P.slurm)
 - [Inference script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_dmd.sh)
 - Try it out on **FastVideo**, where we support a wide range of GPUs from **H100** to **4090**, as well as **Mac** users!

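To make the 3-step setting concrete, here is a minimal Diffusers-side sketch, assuming the checkpoint follows the standard Wan2.1 Diffusers layout; the repo id, guidance scale, and fps below are assumptions, and the FastVideo inference script linked above remains the reference for the exact scheduler and timestep settings.

```python
# Minimal sketch, not the official recipe: load the Diffusers-format checkpoint
# and run the denoising loop with 3 steps. The repo id and guidance settings are
# assumptions; see the FastVideo inference script for the exact configuration.
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "FastVideo/FastWan2.1-T2V-14B-480P-Diffusers"  # assumed Hugging Face repo id

# The Wan VAE is usually kept in float32 for numerical stability.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=448,             # training resolution is 61x448x832; other sizes run, quality may degrade
    width=832,
    num_frames=61,
    num_inference_steps=3,  # the distilled model targets a 3-step denoising loop
    guidance_scale=1.0,     # assumption: DMD-distilled generators typically run without CFG
).frames[0]

export_to_video(video, "fastwan_t2v.mp4", fps=16)  # fps is an assumption
```

Resolutions and frame counts other than 61×448×832 should run, but as noted above, quality may degrade away from the training setting.
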
@@ -42,9 +43,6 @@ This model is jointly finetuned with [DMD](https://arxiv.org/pdf/2405.14867) and
 Training was conducted on **8 nodes with 64 H200 GPUs** in total, using a `global batch size = 64`.
 We enable `gradient checkpointing`, set `HSDP_shard_dim = 8`, `sequence_parallel_size = 4`, and use `learning rate = 1e-5`.
 We set **VSA attention sparsity** to 0.9, and training runs for **3000 steps (~52 hours)**.
- The detailed **training example script** is available [here](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/distill/Wan-Syn-480P/distill_dmd_VSA_t2v_14B_480P.slurm).
-
-
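Roughly, these settings fit together as follows (the per-group breakdown is an interpretation of the stated numbers, not something the scripts spell out):

- 8 nodes × 8 GPUs gives the 64 H200s; with `sequence_parallel_size = 4`, each sample's sequence is split across 4 GPUs, so the 64 GPUs form 64 / 4 = 16 sequence-parallel groups.
- A `global batch size = 64` then works out to 64 / 16 = 4 samples per group per optimizer step.
- `HSDP_shard_dim = 8` presumably shards the 14B weights across the 8 GPUs within a node and replicates them across the 8 nodes.
- 3000 steps in ~52 hours averages to roughly 62 seconds per optimizer step.
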
 
 If you use the FastWan2.1-T2V-14B-480P-Diffusers model for your research, please cite our paper:
 ```