Instructions to use lightx2v/Qwen-Image-Lightning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use lightx2v/Qwen-Image-Lightning with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("lightx2v/Qwen-Image-Lightning")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Which qwen_image models does this work with? I have downloaded qwen_image_q4_k_m gguf, but this LoRA doesn't seem to be working. Which models have you been testing with? Even the qwen_image_fp8_e4m3fn model is not working. The images all come out either really blurry or low-contrast, which suggests I am not using the correct number of steps.
Yes, I found the same problem: with quantized GGUF qwen-image models, all of the 4/8-step and bf16/non-bf16 LoRAs produce nothing but blurry images, even with the correct number of steps.
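Since the blur described above is often a step-count mismatch (the 4-step and 8-step Lightning LoRA variants each expect `num_inference_steps` to match), a small helper can pull the intended step count out of a LoRA filename. This is only an illustrative sketch: the `<N>steps` filename pattern is an assumption based on the variant names mentioned in this thread, not an official API.

```python
import re

def lightning_steps(lora_filename: str, default: int = 8) -> int:
    """Guess the intended num_inference_steps from a Lightning LoRA
    filename (hypothetical pattern, e.g. '...-4steps-...safetensors').
    Falls back to `default` when no '<N>steps' token is found."""
    m = re.search(r"(\d+)steps", lora_filename)
    return int(m.group(1)) if m else default

# e.g. lightning_steps("Qwen-Image-Lightning-4steps-V1.0.safetensors") -> 4
```

You would then pass the returned value as `num_inference_steps` when calling the pipeline, so the sampler's schedule matches the LoRA you loaded.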
Works perfectly fine with GGUF here.
(and most/many tutorials on youtube for Qwen Image + LightX lora uses GGUF too, so you can even see it work great https://youtu.be/erj5YlR9hvE?t=117 )
Depends on what you mean by blurry, though. Qwen Image has a somewhat "soft" look, with or without the low-step LoRAs.
OK, so I found the problem: this was a bug in ComfyUI's LoRA processing. The YouTube videos are all using nightly builds, so they aren't affected, but my installation of ComfyUI is set to the stable channel, so it was still bugged for me. ComfyUI just pushed out an update, and it's working now! Woohoo!