---
pipeline_tag: image-to-video
library_name: diffusers
---
# Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance
Kiwi-Edit is a versatile video editing framework built on an MLLM encoder and a video Diffusion Transformer (DiT). It supports both video editing driven by natural-language instructions alone and editing guided by a reference image together with an instruction.
[[Paper](https://huggingface.co/papers/2603.02175)] [[Project Page](https://showlab.github.io/Kiwi-Edit)] [[GitHub](https://github.com/showlab/Kiwi-Edit)]
## Introduction
Instruction-based video editing has witnessed rapid progress, yet current methods often struggle with precise visual control. Kiwi-Edit introduces a unified editing architecture that synergizes learnable queries and latent visual features for reference semantic guidance. By leveraging a scalable data generation pipeline and the RefVIE dataset, the model achieves significant gains in instruction following and reference fidelity, establishing a new state-of-the-art in controllable video editing.
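To make the guidance mechanism concrete, the sketch below shows one plausible reading of "synergizing learnable queries and latent visual features". It is an illustrative assumption, not Kiwi-Edit's released implementation (the class name `ReferenceGuidance`, the dimensions, and the fusion scheme are all hypothetical): a fixed set of learnable query tokens cross-attends to the MLLM's features, and the distilled tokens are concatenated with the reference's visual latents to form the conditioning stream for the DiT.

```python
import torch
import torch.nn as nn

class ReferenceGuidance(nn.Module):
    """Hypothetical sketch of query/latent fusion for reference guidance.

    Not the Kiwi-Edit codebase; it only mirrors the idea described above:
    learnable queries distill semantics from MLLM features, then are joined
    with latent visual features to condition the video DiT.
    """

    def __init__(self, dim: int = 1024, num_queries: int = 64, num_heads: int = 8):
        super().__init__()
        # Learnable query tokens that pull semantics out of the MLLM features.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, mllm_feats: torch.Tensor, ref_latents: torch.Tensor) -> torch.Tensor:
        # mllm_feats:  (B, N_text, dim) features from the MLLM encoder
        # ref_latents: (B, N_vis, dim)  latent visual features of the reference
        b = mllm_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # Queries cross-attend to MLLM features to extract reference semantics.
        semantic_tokens, _ = self.cross_attn(q, mllm_feats, mllm_feats)
        # Concatenate distilled tokens with raw visual latents -> DiT conditioning.
        return torch.cat([self.proj(semantic_tokens), ref_latents], dim=1)
```

A fixed number of queries keeps the conditioning sequence length constant regardless of instruction length, which is a common motivation for query-based designs.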
## Quick Start
### Installation (Diffusers Environment)
```bash
# Create conda environment
conda create -n diffusers python=3.10 -y
conda activate diffusers
# Install PyTorch 2.7 with CUDA 12.8
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
pip install diffusers decord einops accelerate transformers==4.57.0 opencv-python av
```
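Before running inference, a quick sanity check (a minimal sketch, not part of the Kiwi-Edit codebase) confirms that the pinned versions resolved correctly and that a CUDA device is visible:

```python
import torch
import torchvision
import transformers
import diffusers

# Verify the pinned versions from the install commands above.
print("torch:", torch.__version__)                # expect 2.7.0+cu128
print("torchvision:", torchvision.__version__)    # expect 0.22.0
print("transformers:", transformers.__version__)  # expect 4.57.0
print("diffusers:", diffusers.__version__)

# The demo script below assumes a CUDA GPU; confirm one is visible.
assert torch.cuda.is_available(), "No CUDA device found"
print("GPU:", torch.cuda.get_device_name(0))
```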
### Inference Sample
You can run a quick test on a demo video using the script provided in the official repository:
```bash
python diffusers_demo.py \
    --video_path ./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4 \
    --prompt "Remove the monkey." \
    --save_path output.mp4 \
    --model_path linyq/kiwi-edit-5b-instruct-only-diffusers
```
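If you prefer calling the pipeline directly from Python instead of the demo script, the Hub's generic image-to-video loading pattern is sketched below. Treat it as a starting point rather than the official API: the checkpoint loads via `DiffusionPipeline.from_pretrained`, but the concrete pipeline class and call signature for Kiwi-Edit (for instance, whether it takes an `image` argument or a source video) should be checked against the official repository. The prompt and image URL are placeholder examples.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Switch device_map to "mps" for Apple silicon.
pipe = DiffusionPipeline.from_pretrained(
    "AEmotionStudio/kiwi-edit-instruct",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Generic image-to-video call; the Kiwi-Edit editing pipeline may instead
# expect a source video plus an editing instruction.
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```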
## Citation
If you use Kiwi-Edit in your research, please cite the following paper:
```bibtex
@misc{kiwiedit,
  title={Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance},
  author={Yiqi Lin and Guoqiang Liang and Ziyun Zeng and Zechen Bai and Yanzhe Chen and Mike Zheng Shou},
  year={2026},
  eprint={2603.02175},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.02175},
}
``` |