Add model card for Kiwi-Edit

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +43 -0
README.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ library_name: diffusers
+ pipeline_tag: image-to-video
+ ---
+
+ # Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance
+
+ Kiwi-Edit is a versatile video editing framework built on a multimodal LLM (MLLM) encoder and a video Diffusion Transformer (DiT). It supports:
+ - **Instruction Video Editing**: Modify video content through text prompts.
+ - **Reference Image Guidance**: Use a reference image to guide editing for higher visual fidelity and more precise control.
+
+ The model combines learnable queries with latent visual features to provide reference semantic guidance, achieving significant gains in both instruction following and reference fidelity.
+
+ - **Project Page:** [https://showlab.github.io/Kiwi-Edit/](https://showlab.github.io/Kiwi-Edit/)
+ - **Paper:** [Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance](https://huggingface.co/papers/2603.02175)
+ - **Repository:** [https://github.com/showlab/Kiwi-Edit](https://github.com/showlab/Kiwi-Edit)
+
+ ## Usage
+
+ To run Kiwi-Edit inference, follow the installation instructions in the [official repository](https://github.com/showlab/Kiwi-Edit). You can then run a quick test on a demo video with the following command:
+
+ ```bash
+ python diffusers_demo.py \
+     --video_path ./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4 \
+     --prompt "Remove the monkey." \
+     --save_path output.mp4 \
+     --model_path linyq/kiwi-edit-5b-instruct-only-diffusers
+ ```
+
+ ## Citation
+
+ If you use Kiwi-Edit in your research, please cite the following work:
+
+ ```bibtex
+ @misc{kiwiedit,
+   title={Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance},
+   author={Yiqi Lin and Guoqiang Liang and Ziyun Zeng and Zechen Bai and Yanzhe Chen and Mike Zheng Shou},
+   year={2026},
+   eprint={2603.02175},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2603.02175},
+ }
+ ```
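For batch-editing several clips, the quick-test command above can be wrapped in a small Python driver. This is a minimal sketch using only the standard library; the script name, flags, and default model path simply mirror the command documented in the Usage section, and `build_demo_command` is a hypothetical helper, not part of the Kiwi-Edit repo:

```python
import subprocess

def build_demo_command(video_path, prompt, save_path,
                       model_path="linyq/kiwi-edit-5b-instruct-only-diffusers"):
    """Assemble the diffusers_demo.py invocation shown in the Usage section."""
    return [
        "python", "diffusers_demo.py",
        "--video_path", video_path,
        "--prompt", prompt,
        "--save_path", save_path,
        "--model_path", model_path,
    ]

if __name__ == "__main__":
    # Run the same demo edit end-to-end (requires the Kiwi-Edit repo checkout).
    jobs = [
        ("./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4",
         "Remove the monkey.", "output.mp4"),
    ]
    for video, prompt, out in jobs:
        subprocess.run(build_demo_command(video, prompt, out), check=True)
```

Each job runs as a separate process, so a failed edit (non-zero exit) stops the batch via `check=True` rather than silently producing partial output.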