---
pipeline_tag: robotics
library_name: diffusers
license: mit
---

# Real-Time Iteration Scheme for Diffusion Policy (RTI-DP)

This repository contains the official model weights and code for the paper: "Real-Time Iteration Scheme for Diffusion Policy".

Diffusion policies have demonstrated impressive performance in robotic manipulation tasks, but their long inference time, stemming from extensive iterative denoising, limits their applicability to latency-critical tasks. Inspired by the Real-Time Iteration (RTI) scheme from optimal control, RTI-DP significantly reduces inference time without additional training or policy redesign. The scheme accelerates optimization by using the solution from the previous time step as the initial guess for the current one, so it integrates seamlessly into many pre-trained diffusion-based models and makes them suitable for real-time robotic applications at comparable task performance.
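The warm-starting idea can be illustrated with a minimal sketch. This is not the paper's implementation: `denoise_step` is a hypothetical stand-in for one learned denoising update, and the shift-and-refine loop only mirrors the RTI structure of carrying the previous solution forward as the initial guess.

```python
import numpy as np

def denoise_step(actions, observation):
    # Hypothetical stand-in for one learned denoising update; a real
    # diffusion policy would evaluate a trained network here.
    return actions + 0.5 * (observation - actions)

def rti_control_loop(observations, horizon=8, act_dim=2):
    """Run one cheap denoising iteration per control cycle, warm-started
    from the previous cycle's time-shifted solution (RTI-style sketch)."""
    actions = np.zeros((horizon, act_dim))
    executed = []
    for obs in observations:
        # Time-shift the previous solution: drop the executed step and
        # repeat the last one, giving the initial guess for this cycle.
        actions = np.concatenate([actions[1:], actions[-1:]], axis=0)
        # A single denoising iteration replaces the full multi-step chain.
        actions = denoise_step(actions, obs)
        executed.append(actions[0].copy())
    return np.array(executed)
```

Because each cycle performs only one refinement of an already-good guess, the per-step latency stays roughly constant instead of scaling with the number of denoising steps.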

RTI-DP Teaser

## Usage

This model is designed to be used with its official codebase. For detailed installation instructions, environment setup, and further information, please refer to the official GitHub repository, which is based on Diffusion Policy.

## Evaluation

To evaluate RTI-DP policies with DDPM, you can use the provided script from the repository:

```sh
python ../eval_rti.py --config-name=eval_diffusion_rti_lowdim_workspace.yaml
```

For RTI-DP-scale checkpoints, refer to [duandaxia/rti-dp-scale](https://huggingface.co/duandaxia/rti-dp-scale) on Hugging Face.
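The checkpoints can also be fetched programmatically. A minimal sketch using `huggingface_hub`, assuming the repository is public and the library is installed:

```python
from huggingface_hub import snapshot_download

# Download the RTI-DP-scale checkpoint repository into the local
# Hugging Face cache and return the local directory path.
ckpt_dir = snapshot_download(repo_id="duandaxia/rti-dp-scale")
print(ckpt_dir)
```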

## Citation

If you find our work useful, please consider citing our paper:

```bibtex
@misc{duan2025rtidp,
    title={Real-Time Iteration Scheme for Diffusion Policy},
    author={Yufei Duan and Hang Yin and Danica Kragic},
    year={2025},
}
```

## Acknowledgements

We thank the authors of Diffusion Policy, Consistency Policy, and Streaming Diffusion Policy for sharing their codebases.