Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis
Paper: arXiv:2505.10046
Run inference with 🤗 Diffusers:

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ooutlierr/fuse-dit", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```
Alternatively, you can download the pre-trained model and use `FuseDiTPipeline` from our codebase to run inference:

```python
import torch
from diffusion.pipelines import FuseDiTPipeline

pipeline = FuseDiTPipeline.from_pretrained("/path/to/pipeline/").to("cuda")
image = pipeline(
    "your prompt",
    width=512,
    height=512,
    num_inference_steps=25,
    guidance_scale=6.0,
    use_cache=True,
)[0][0]
image.save("test.png")
```
If you find this work useful, please cite:

```bibtex
@article{tang2025exploringdeepfusion,
  title={Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis},
  author={Bingda Tang and Boyang Zheng and Xichen Pan and Sayak Paul and Saining Xie},
  journal={arXiv preprint arXiv:2505.10046},
  year={2025},
}
```