Yume-1.5: A Text-Controlled Interactive World Generation Model
Paper: arXiv:2512.22096
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video
# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stdstu123/Yume-5B-720P", dtype=torch.bfloat16, device_map="cuda"
)
prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")

Yume-1.5 is a framework for generating realistic, interactive, and continuous worlds from a single image or text prompt. It supports keyboard-based exploration of the generated environments through a pipeline that integrates context compression and real-time streaming acceleration.
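Exploration proceeds by rolling the model forward clip by clip. The loop below is a minimal, hypothetical sketch built on the snippet above, not Yume-1.5's actual streaming interface: it assumes the pipeline accepts the previous clip's last frame as the next conditioning image, and the action-style prompts ("walk forward", "turn left") are illustrative stand-ins for the model's real keyboard controls.

# Hypothetical exploration loop: feed each clip's last frame back in as
# the next conditioning image. Reuses `pipe` and `image` from above.
actions = ["walk forward", "turn left", "walk forward"]
frames = []
current_image = image
for action in actions:
    clip = pipe(image=current_image, prompt=action).frames[0]
    frames.extend(clip)
    current_image = clip[-1]  # last frame conditions the next step
export_to_video(frames, "exploration.mp4")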
For detailed installation and setup instructions, please refer to the GitHub repository.
To perform image-to-video generation using the provided scripts:
# Generate videos from images in the specified directory
bash scripts/inference/sample_jpg.sh --jpg_dir="./jpg" --caption_path="./caption.txt"
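The script pairs input images with text prompts from the caption file; its exact format is defined in the GitHub repository. As a loose sketch of the setup (assuming a single image and a one-line caption file, which may not match the script's real expectations):

from pathlib import Path
import shutil

# Hypothetical setup for sample_jpg.sh; check the repository for the
# actual directory layout and caption format.
Path("jpg").mkdir(exist_ok=True)
shutil.copy("my_scene.jpg", "jpg/")
Path("caption.txt").write_text("A man walks forward along a rainy street.\n")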
If you use Yume for your research, please cite the following:
@article{mao2025yume,
  title={Yume: An Interactive World Generation Model},
  author={Mao, Xiaofeng and Lin, Shaoheng and Li, Zhen and Li, Chuanhao and Peng, Wenshuo and He, Tong and Pang, Jiangmiao and Chi, Mingmin and Qiao, Yu and Zhang, Kaipeng},
  journal={arXiv preprint arXiv:2507.17744},
  year={2025}
}

@article{mao2025yume15,
  title={Yume-1.5: A Text-Controlled Interactive World Generation Model},
  author={Mao, Xiaofeng and Li, Zhen and Li, Chuanhao and Xu, Xiaojie and Ying, Kaining and He, Tong and Pang, Jiangmiao and Qiao, Yu and Zhang, Kaipeng},
  journal={arXiv preprint arXiv:2512.22096},
  year={2025}
}