Improve model card: add pipeline tag, paper/code links, and sample usage

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +42 -14
README.md CHANGED
@@ -1,24 +1,25 @@
  ---
- license: apache-2.0
- tags:
- - video-generation
- - game-rendering
- - game-editing
- - diffusion
- - g-buffer
- - relighting
- - text-to-video
- - wan2.1
- pipeline_tag: text-to-video
  base_model: Wan-AI/Wan2.1-T2V-1.3B
  datasets:
- - custom
  library_name: diffusers
  ---

  # Game Editing

- **Game Editing** is a fine-tuned video diffusion model for controllable game video synthesis. It enables users to manipulate lighting and environmental effects in game footage via text prompts, conditioned on G-buffer inputs.

  ## Model Details

@@ -31,6 +32,24 @@ library_name: diffusers
  | **Clip Length** | 81 frames |
  | **Format** | SafeTensors |

  ## Inputs

  The model takes the following inputs:
@@ -76,4 +95,13 @@ In the absence of directly comparable methods, we establish a baseline by adapti

  ## Citation

- If you find this model useful, please consider citing our work.

  ---
  base_model: Wan-AI/Wan2.1-T2V-1.3B
  datasets:
+ - custom
  library_name: diffusers
+ license: apache-2.0
+ pipeline_tag: image-to-video
+ tags:
+ - video-generation
+ - game-rendering
+ - game-editing
+ - diffusion
+ - g-buffer
+ - relighting
+ - wan2.1
  ---

  # Game Editing

+ **Game Editing** is a fine-tuned video diffusion model for controllable game video synthesis, presented in the paper [Generative World Renderer](https://huggingface.co/papers/2604.02329). It enables users to manipulate lighting and environmental effects in game footage via text prompts, conditioned on G-buffer inputs.
+
+ [**Project Page**](https://alaya-studio.github.io/renderer) | [**GitHub Repository**](https://github.com/ShandaAI/AlayaRenderer) | [**arXiv**](https://arxiv.org/abs/2604.02329)

  ## Model Details

  | **Clip Length** | 81 frames |
  | **Format** | SafeTensors |

+ ## Sample Usage
+
+ To run inference, please follow the installation instructions in the [official repository](https://github.com/ShandaAI/AlayaRenderer). Below is an example command for running the game editing model:
+
+ ```bash
+ cd game_editing
+
+ CUDA_VISIBLE_DEVICES=0 python \
+   examples/wanvideo/model_inference/inference_gbuffer_caption.py \
+   --checkpoint models/train/Wan2.1-T2V-1.3B_gbuffer/model.safetensors \
+   --gpu 0 \
+   --style snowy_winter \
+   --prompt "the scene is set in a frozen, snow-covered environment under cold, pale winter light with falling snowflakes, creating a silent and ethereal winter wonderland atmosphere." \
+   --gbuffer_dir test_dataset \
+   --save_dir outputs/ \
+   --num_frames 81 --height 480 --width 832
+ ```
+
  ## Inputs

  The model takes the following inputs:

  ## Citation

+ If you find this model useful, please consider citing the following work:
+
+ ```bibtex
+ @article{huang2026generativeworldrenderer,
+   title={Generative World Renderer},
+   author={Zheng-Hui Huang and Zhixiang Wang and Jiaming Tan and Ruihan Yu and Yidan Zhang and Bo Zheng and Yu-Lun Liu and Yung-Yu Chuang and Kaipeng Zhang},
+   journal={arXiv preprint arXiv:2604.02329},
+   year={2026}
+ }
+ ```