---
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
- Wan-AI/Wan2.1-T2V-14B
license: apache-2.0
pipeline_tag: text-to-video
tags:
- rcm
- consistency-models
- diffusion-distillation
- video-generation
---

# rCM: Score-Regularized Continuous-Time Consistency Model

[**Paper**](https://arxiv.org/abs/2510.08431) | [**Website**](https://research.nvidia.com/labs/dir/rcm) | [**Code**](https://github.com/NVlabs/rcm)

rCM is a framework for scaling up continuous-time consistency distillation to large-scale video diffusion models (up to 14B parameters). It enables high-fidelity video generation in only 1–4 steps, accelerating diffusion sampling by roughly 15×–50×. This repository contains unofficial rCM models for Wan, reproduced by Tsinghua University.

The Wan2.2 rCM checkpoints are obtained by merging the Wan2.1 rCM weights into the Wan2.2 checkpoints, with no extra training involved. This should have the same effect as applying the Wan2.1 rCM LoRAs directly and adjusting their strength.
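The merge described above can be sketched as a weight-delta transplant: the difference between the distilled Wan2.1 rCM weights and the Wan2.1 base weights is added onto the Wan2.2 base. The sketch below is illustrative only; the function name, the `strength` parameter, and the toy scalar "weights" are assumptions, not the repository's actual API:

```python
# Hypothetical sketch of the delta-based merge (names are illustrative):
#   merged = wan2.2_base + strength * (wan2.1_rcm - wan2.1_base)

def merge_rcm_delta(base_22, base_21, rcm_21, strength=1.0):
    """Apply the Wan2.1 rCM weight delta onto Wan2.2 weights.

    Each argument is a mapping from parameter name to weight; in practice
    these would be tensors from a state dict, here plain floats for clarity.
    """
    merged = {}
    for name, w22 in base_22.items():
        delta = rcm_21[name] - base_21[name]  # what rCM distillation changed
        merged[name] = w22 + strength * delta  # transplant onto Wan2.2
    return merged

# Toy example with a single scalar "parameter":
base_21 = {"blocks.0.attn.q": 0.50}
rcm_21  = {"blocks.0.attn.q": 0.70}  # distilled Wan2.1 rCM weight
base_22 = {"blocks.0.attn.q": 0.55}
merged = merge_rcm_delta(base_22, base_21, rcm_21)
# merged["blocks.0.attn.q"] == 0.55 + (0.70 - 0.50) == 0.75
```

Scaling `strength` here plays the same role as adjusting the LoRA strength when applying the Wan2.1 rCM LoRAs directly.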

## Inference

To run the models, please refer to the environment setup in the [official GitHub repository](https://github.com/NVlabs/rcm). 

Below is an example inference script for running rCM on T2V as found in the documentation:

```bash
# Example for Wan2.1 T2V 1.3B
PYTHONPATH=. python rcm/inference/wan2pt1_t2v_rcm_infer.py \
    --model_size 1.3B \
    --dit_path assets/checkpoints/rCM_Wan2.1_T2V_1.3B_480p.pt \
    --num_samples 5 \
    --prompt "A cinematic shot of a snowy mountain at sunrise"
```

## Citation

```bibtex
@article{zheng2025rcm,
  title={Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency},
  author={Zheng, Kaiwen and Wang, Yuji and Ma, Qianli and Chen, Huayu and Zhang, Jintao and Balaji, Yogesh and Chen, Jianfei and Liu, Ming-Yu and Zhu, Jun and Zhang, Qinsheng},
  journal={arXiv preprint arXiv:2510.08431},
  year={2025}
}
```