---
license: apache-2.0
pipeline_tag: image-to-video
tags:
  - Text-to-Video
  - Image-to-Video
  - Diffusion Video Model
  - World Model
---

# Yume-1.5: A Text-Controlled Interactive World Generation Model

Yume-1.5 is a framework for generating realistic, interactive, and continuous worlds from a single image or text prompt. It supports keyboard-based exploration of the generated environments by integrating context compression with real-time streaming acceleration.

## Features

- **Long-video generation:** unified context compression with linear attention.
- **Real-time acceleration:** bidirectional attention distillation.
- **Text-controlled events:** specific world events can be triggered via text prompts.

## Usage

For detailed installation and setup instructions, please refer to the GitHub repository.

### Inference Example

To perform image-to-video generation using the provided scripts:

```shell
# Generate videos from images in the specified directory
bash scripts/inference/sample_jpg.sh --jpg_dir="./jpg" --caption_path="./caption.txt"
```
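The script takes a directory of input images and a caption file. As a minimal sketch of preparing those inputs (the pairing of images to prompt lines is an assumption here; see the GitHub repository for the authoritative format):

```shell
# Assumed layout: sample_jpg.sh reads each .jpg in --jpg_dir and pairs it
# with a text prompt from --caption_path. The exact caption-file format is
# an assumption; check the GitHub repository for the authoritative spec.
mkdir -p ./jpg
# Place your source frames in ./jpg, e.g.:
# cp my_scene.jpg ./jpg/
printf 'A quiet street after rain, camera slowly moving forward\n' > ./caption.txt
```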

## Citation

If you use Yume for your research, please cite the following:

```bibtex
@article{mao2025yume,
  title={Yume: An Interactive World Generation Model},
  author={Mao, Xiaofeng and Lin, Shaoheng and Li, Zhen and Li, Chuanhao and Peng, Wenshuo and He, Tong and Pang, Jiangmiao and Chi, Mingmin and Qiao, Yu and Zhang, Kaipeng},
  journal={arXiv preprint arXiv:2507.17744},
  year={2025}
}
```