Instructions to use lightx2v/Qwen-Image-Lightning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use lightx2v/Qwen-Image-Lightning with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("lightx2v/Qwen-Image-Lightning")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Request for instructions to correctly convert a model to fp8 with appropriate scaling
#39
by Pbgsgxyx - opened
Hi,
I am using some Qwen finetunes, and they all share the same issue as the fp8 model released for ComfyUI: the fp8 weights were produced by directly downcasting the original fp32/fp16 weights rather than through a calibrated conversion process with appropriate scaling. As a result, images show a grid-like pattern when the model is used with Qwen-Image-Lightning-4steps-V2.0.safetensors or Qwen-Image-Lightning-8steps-V2.0.safetensors.

Would it be possible to share the methodology/code used for the calibrated conversion process with appropriate scaling? I would like to convert these finetunes properly so that they can be used with your more recent Lightning LoRAs.