---
title: Qwen-Image-Edit CPU
emoji: 🎨
colorFrom: yellow
colorTo: indigo
sdk: docker
app_file: app.py
suggested_hardware: cpu-basic
startup_duration_timeout: 1h
pinned: false
license: mit
tags:
  - image-editing
  - text-to-image
  - qwen-image
  - cpu
  - gguf
  - mcp-server
short_description: Qwen-Image-Edit Rapid-AIO on CPU via GGUF
models:
  - Qwen/Qwen-Image-Edit-2511
  - Qwen/Qwen-Image-2512
---

# Qwen-Image-Edit Rapid-AIO / Free CPU

Image editing and text-to-image generation via GGUF-quantized models on free CPU hardware.

- **Diffusion**: Rapid-AIO v23 Q3_K (Lightning pre-fused, 4 steps) from Arunk25
- **Text encoder**: Qwen2.5-VL-7B-Instruct-abliterated Q3_K_M from mradermacher
- **VAE**: pig_qwen_image_vae from calcuis
- **Alt model**: Image-2512 Q3_K_M (txt2img only), switchable via dropdown
- **Engine**: stable-diffusion.cpp (C++ inference, no PyTorch)
- **Speed**: ~45 min per 512x512 image on 2 vCPUs
- **RAM**: ~14.9 GB model weights + ~1 GB activations
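Since the engine is stable-diffusion.cpp rather than PyTorch, inference amounts to invoking the `sd` binary with the three GGUF files above. A minimal sketch of building such an invocation from Python; the exact flag names (`--diffusion-model`, `--vae`, `--mode`, `-i`, ...) and GGUF file names are assumptions here, not taken from this Space's `app.py` — check `sd --help` for the build you compile.

```python
import os

def build_sd_command(prompt: str, out_path: str, init_image=None,
                     steps: int = 4, cfg_scale: float = 2.5, seed: int = -1):
    """Assemble an argv list for the stable-diffusion.cpp `sd` binary.

    Flag names and GGUF file names are illustrative assumptions.
    """
    cmd = [
        "./sd",
        "--diffusion-model", "qwen-image-edit-rapid-aio-v23-q3_k.gguf",  # assumed file name
        "--vae", "pig_qwen_image_vae.gguf",                              # assumed file name
        "-p", prompt,
        "--steps", str(steps),        # Lightning pre-fused model needs only 4
        "--cfg-scale", str(cfg_scale),
        "-s", str(seed),              # -1 = random seed
        "-t", str(os.cpu_count() or 2),  # saturate both vCPUs
        "-o", out_path,
    ]
    if init_image is not None:
        # Edit mode: supply the source image (flag names assumed).
        cmd += ["--mode", "img2img", "-i", init_image]
    return cmd

# Then run it, e.g.:
# subprocess.run(build_sd_command("anime style", "out.png", "photo.png"), check=True)
```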

## API

```python
from gradio_client import Client, handle_file

client = Client("WeReCooking/Qwen-Image-Edit-CPU")
result = client.predict(
    prompt="transform into anime style",
    negative_prompt="worst quality, blurry",
    init_image=handle_file("photo.png"),
    model_choice="Rapid-AIO-v23 Q3 (edit)",
    aspect_ratio="Auto (match input, max 512px)",
    steps=4,
    cfg_scale=2.5,
    guidance=3.0,
    seed=-1,
    api_name="/infer",
)
print(result)
```
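For text-to-image with the alternate Image-2512 model, the same `/infer` endpoint can be called without an input image. Passing `init_image=None` and the `aspect_ratio` label used below are assumptions; confirm both against the Space's "Use via API" page.

```python
def txt2img(prompt: str) -> str:
    """Generate an image with the alternate txt2img model; returns a local file path."""
    from gradio_client import Client  # pip install gradio_client

    client = Client("WeReCooking/Qwen-Image-Edit-CPU")
    return client.predict(
        prompt=prompt,
        negative_prompt="worst quality, blurry",
        init_image=None,                      # no source image for txt2img (assumption)
        model_choice="Image-2512 (txt2img)",  # dropdown label from the model list above
        aspect_ratio="1:1",                   # assumed label for square output
        steps=4,
        cfg_scale=2.5,
        guidance=3.0,
        seed=-1,
        api_name="/infer",
    )

# path = txt2img("a cat on a windowsill")  # expect ~45 min on the free CPU tier
```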

## MCP

```json
{
  "mcpServers": {
    "qwen-image-edit": {
      "url": "https://werecooking-qwen-image-edit-cpu.hf.space/gradio_api/mcp/sse"
    }
  }
}
```

## CLI

```shell
python app.py "transform into anime style" -i photo.png -o edited.png
python app.py "a cat on a windowsill" -o cat.png --model "Image-2512 (txt2img)"
```

## Credits