---
license: mit
library_name: transformers
pipeline_tag: any-to-any
---
# MMaDA-Parallel-M
We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation (MMaDA-Parallel), a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.
## Note: This version is still in development; visual artifacts may appear in generated outputs.
[Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel) | [Project Page](https://tyfeld.github.io/mmadaparellel.github.io/)
<div align="center">
<img src="https://github.com/tyfeld/MMaDA-Parallel/raw/main/assets/demos.png"/>
</div>
## Sample Usage
This example demonstrates how to perform parallel generation using MMaDA-Parallel-A. Make sure you have installed the necessary dependencies as outlined in the [GitHub repository](https://github.com/tyfeld/MMaDA-Parallel).
```bash
cd MMaDA-Parallel-A
python inference.py \
    --checkpoint tyfeld/MMaDA-Parallel-A \
    --vae_ckpt tyfeld/MMaDA-Parallel-A \
    --prompt "Replace the laptops with futuristic transparent tablets displaying holographic screens, and change the drink to a cup of glowing blue energy drink." \
    --image_path examples/image.png \
    --height 512 \
    --width 512 \
    --timesteps 64 \
    --text_steps 128 \
    --text_gen_length 256 \
    --text_block_length 32 \
    --cfg_scale 0 \
    --cfg_img 4.0 \
    --temperature 1.0 \
    --text_temperature 0 \
    --seed 42 \
    --output_dir output/results_interleave
```
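If you prefer to fetch the checkpoint ahead of time rather than letting `inference.py` resolve it from the Hub, a minimal sketch using `huggingface_hub` is shown below. The repo id is taken from the sample command above; the `local_dir` path is a hypothetical choice, not something the repository prescribes.

```python
from huggingface_hub import snapshot_download

def fetch_checkpoint(repo_id: str = "tyfeld/MMaDA-Parallel-A",
                     local_dir: str = "checkpoints/MMaDA-Parallel-A") -> str:
    """Download the full model repo from the Hugging Face Hub.

    Returns the local directory containing the checkpoint files,
    which can then be passed to --checkpoint / --vae_ckpt.
    """
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

if __name__ == "__main__":
    # Note: this downloads several GB of weights on first run.
    print(fetch_checkpoint())
```

The returned path can be substituted for the Hub id in the `--checkpoint` and `--vae_ckpt` arguments if you want fully offline inference afterwards.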
## Citation
```bibtex
@article{tian2025mmadaparallel,
title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
journal={arXiv preprint arXiv:2511.09611},
year={2025}
}
```