Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("FireRedTeam/FireRed-Image-Edit-1.0-Lightning", dtype=torch.bfloat16, device_map="cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
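The "mps" comment above can be made automatic. A minimal sketch (plain PyTorch, not specific to this model) that selects the device string to pass as device_map:

```python
import torch

# Pick the best available backend: CUDA GPU, Apple Silicon ("mps"), or CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
print(device)
```

The resulting string can then be used in place of the hard-coded "cuda" in the from_pretrained call.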

Please refer to FireRed-Image-Edit-1.0 to learn how to use the distilled edit LoRA.

Use with diffusers 🧨:

Make sure to install diffusers from main (pip install git+https://github.com/huggingface/diffusers.git).

from diffusers import QwenImageEditPlusPipeline
import torch 
from PIL import Image

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "FireRedTeam/FireRed-Image-Edit-1.0", torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights(
    "FireRedTeam/FireRed-Image-Edit-1.0-Lightning", weight_name="FireRed-Image-Edit-1.0-Lightning-8steps-v1.0.safetensors"
)

prompt = "在书本封面Python的下方,添加一行英文文字2nd Edition"  # "Below 'Python' on the book cover, add a line of English text: 2nd Edition"
input_image_path = "./examples/edit_example.png"
input_image_raw = Image.open(input_image_path).convert('RGB')

image = pipe(
    image=[input_image_raw],
    prompt=prompt,
    height=None,
    width=None,
    num_inference_steps=8,
    generator=torch.manual_seed(0),
    true_cfg_scale=1.0,  # do not use standard CFG
).images[0]
image.save("firered_image_edit_fewsteps.png")
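The call above seeds via torch.manual_seed(0), which sets the global RNG. A generic PyTorch alternative (not specific to this pipeline) is a dedicated torch.Generator, which isolates the pipeline's randomness from other random state:

```python
import torch

# A dedicated Generator keeps the pipeline's sampling independent of the global RNG.
generator = torch.Generator(device="cpu").manual_seed(0)
a = torch.randn(4, generator=generator)

# Reseeding the same generator reproduces the exact same draw.
generator.manual_seed(0)
b = torch.randn(4, generator=generator)
print(torch.equal(a, b))  # prints True
```

Such a generator can be passed directly as the generator argument in the pipe call.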