How to use lightx2v/Qwen-Image-Edit-2511-Lightning with Diffusers:
pip install -U diffusers transformers accelerate
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2511", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("lightx2v/Qwen-Image-Edit-2511-Lightning")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

# this LoRA is distilled for 4-step inference (see "Key Optimizations" below)
image = pipe(image=input_image, prompt=prompt, num_inference_steps=4).images[0]
image.save("output.png")
```

Qwen-Image-Edit-2511-Lightning
Model Overview
Qwen-Image-Edit-2511-Lightning is a collection of optimized models for image editing that use step distillation and quantization to deliver high-efficiency inference. This repository hosts three model files with distinct characteristics:
| Model File Name | Type | Key Features |
|---|---|---|
| Qwen-Image-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors | 4-step Distilled LoRA | BF16 precision, lightweight, 4-step inference |
| Qwen-Image-Edit-2511-Lightning-4steps-V1.0-fp32.safetensors | 4-step Distilled LoRA | FP32 precision, high accuracy, 4-step inference |
| qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning.safetensors | FP8 Quantized | FP8 (e4m3fn scaled) precision, fused with 4-step distilled LoRA, optimized for low-memory deployment |
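Because the repository ships two LoRA precisions, Diffusers' `pipe.load_lora_weights` can be pointed at one file explicitly via its `weight_name` argument. A minimal sketch (the file names are copied from the table above; `pick_lora` is an illustrative helper, not part of any library):

```python
# File names from the table above, keyed by LoRA precision.
LIGHTNING_LORAS = {
    "bf16": "Qwen-Image-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors",
    "fp32": "Qwen-Image-Edit-2511-Lightning-4steps-V1.0-fp32.safetensors",
}

def pick_lora(precision: str) -> str:
    """Illustrative helper: return the LoRA file name for a given precision."""
    try:
        return LIGHTNING_LORAS[precision]
    except KeyError:
        raise ValueError(f"unknown precision {precision!r}; expected one of {sorted(LIGHTNING_LORAS)}")

# With a Diffusers pipeline already loaded (see the snippet at the top of this card):
# pipe.load_lora_weights("lightx2v/Qwen-Image-Edit-2511-Lightning",
#                        weight_name=pick_lora("fp32"))
print(pick_lora("bf16"))
```

Omitting `weight_name` lets Diffusers pick a default file from the repo; passing it makes the precision choice explicit.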
Usage Instructions
This model suite supports two mainstream usage frameworks, with detailed guides provided below:
1. Qwen-Image-Lightning Framework
For full documentation on model usage within the Qwen-Image-Lightning ecosystem (including environment setup, inference pipelines, and customization), please refer to: Qwen-Image-Lightning GitHub Repository
2. LightX2V Framework
The models are fully compatible with the LightX2V lightweight video/image generation inference framework. For step-by-step usage examples, configuration templates, and performance optimization tips, see: LightX2V Qwen Image Edit Documentation
Key Optimizations
- Step Distillation: The LoRA models reduce the original inference steps to just 4, achieving a significant speedup (~10x faster than standard 40-step inference) while preserving image editing quality.
- FP8 Quantization: The quantized base model balances performance and resource efficiency, reducing GPU memory usage by ~50% compared to FP32 while maintaining editing fidelity.
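The arithmetic behind both bullets can be sketched in a few lines. This is a back-of-the-envelope illustration, not a measurement: the 20B parameter count is an assumption for the example, and real GPU usage also includes activations, the text encoder, and quantization scale tensors.

```python
def weight_bytes_gib(num_params: int, bytes_per_param: int) -> float:
    """Weight-only memory footprint in GiB (ignores activations and scale tensors)."""
    return num_params * bytes_per_param / 2**30

N = 20_000_000_000  # illustrative parameter count, not a figure from this card

bf16_gib = weight_bytes_gib(N, 2)  # 2 bytes per BF16 weight
fp8_gib = weight_bytes_gib(N, 1)   # 1 byte per FP8 (e4m3fn) weight

# FP8 halves weight-only memory relative to BF16.
print(f"bf16 ~= {bf16_gib:.1f} GiB, fp8 ~= {fp8_gib:.1f} GiB")

# Step distillation: 40 denoising steps -> 4 steps = 10x fewer steps,
# the source of the ~10x speedup quoted above.
print(f"step-count reduction: {40 // 4}x")
```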
Support
For technical issues, feature requests, or integration questions:
- Open an issue in the Qwen-Image-Lightning repo (for Qwen framework-specific questions)
- Open an issue in the LightX2V repo (for LightX2V integration questions)
Base model: Qwen/Qwen-Image-Edit-2511