Instructions to use lightx2v/Qwen-Image-Lightning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers

How to use lightx2v/Qwen-Image-Lightning with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("lightx2v/Qwen-Image-Lightning")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
omg yet again, explanation needed.
#35
by pikkaa - opened
what is this Qwen-Image-fp8-e4m3fn-Lightning-4steps-V1.0 vs this Qwen-Image-Lightning-4steps-V2.0?
both are speed LoRAs, and the first one is newer than the V2 one, so what differs here?
just look at release date and use Qwen-Image-Edit-2509 as it's different and newest
ah, I didn't know that the 2509 LoRA was made for the 2509 model; I thought the QI V2 LoRA might just be better. sorry, my bad
btw this Qwen-Image-fp8-e4m3fn-Lightning-4steps-V1.0-fp32.safetensors
was released 2 days earlier. so it's still dedicated to Qwen-Image, not Qwen-Image-Edit, right?
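As an aside on the `fp8-e4m3fn` part of that filename: it names the 8-bit float format the weights are stored in (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits; the "fn" suffix means finite-only, i.e. no infinities, with only one NaN bit pattern per sign). The `decode_e4m3fn` helper below is a hypothetical illustration, not part of any library; it just decodes one byte of that format to show its range:

```python
def decode_e4m3fn(byte: int) -> float:
    """Decode one e4m3fn byte: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits.

    "fn" = finite-only: there are no infinities; the all-ones exponent with
    an all-ones mantissa is the single NaN encoding (per sign).
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")
    if exp == 0:
        # subnormal: no implicit leading 1, fixed exponent of 1 - bias
        return sign * (man / 8) * 2.0 ** (1 - 7)
    return sign * (1 + man / 8) * 2.0 ** (exp - 7)

print(decode_e4m3fn(0b0_1111_110))  # largest finite value: 448.0
print(decode_e4m3fn(0b0_0111_000))  # 1.0
```

That 448 maximum (versus ~65504 for fp16) is why fp8 checkpoints trade precision and range for roughly half the memory of fp16 weights.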