How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "camenduru/FLUX.1_Kontext-Lightning", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

# the fused acceleration LoRAs target low step counts (see the 6-8 step note below)
image = pipe(image=input_image, prompt=prompt, num_inference_steps=8).images[0]
image.save("output.png")

Update 7/9/25: This model is now quantized and implemented in this example space. Preliminary VRAM usage is around 10 GB, with faster inference. I will be experimenting with different weights and schedulers to find particularly well-performing combinations.
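The card does not say which quantization scheme the example space uses; below is a minimal sketch of 4-bit loading via diffusers' bitsandbytes integration (requires pip install bitsandbytes, and assumes the repo follows the standard Flux layout with a transformer subfolder):

import torch
from diffusers import DiffusionPipeline, FluxTransformer2DModel, BitsAndBytesConfig

# quantize only the transformer (the largest component) to 4-bit NF4
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "camenduru/FLUX.1_Kontext-Lightning",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = DiffusionPipeline.from_pretrained(
    "camenduru/FLUX.1_Kontext-Lightning",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # further reduces peak VRAM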

FLUX.1 Kontext-dev X LoRA Experimentation

Highly experimental; more details will follow.

  • 6-8 steps
  • Euler sampler with SGM Uniform scheduling (recommended, but feel free to experiment). Results so far are mixed; please share what works for you (see the sketch after this list).
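
"Euler" and "SGM Uniform" are ComfyUI sampler/scheduler names; in diffusers the closest counterpart is the pipeline's default FlowMatchEulerDiscreteScheduler. A minimal sketch applying the 6-8 step recommendation, continuing from the pipeline above:

from diffusers import FlowMatchEulerDiscreteScheduler

# re-create the default flow-matching Euler scheduler from the pipeline's config
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config)

# 6-8 steps per the recommendation above
image = pipe(image=input_image, prompt=prompt, num_inference_steps=6).images[0]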

Model Details

Experimenting with FLUX.1-dev LoRAs and how they affect Kontext-dev. This model has been fused with acceleration LoRAs, as sketched below.
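
The card does not name the specific acceleration LoRAs that were fused. For illustration, here is a minimal sketch of the fuse step in diffusers, using the official Kontext-dev base and a hypothetical placeholder LoRA repo id:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)

# "your-org/flux-accel-lora" is a placeholder; substitute a real acceleration LoRA
pipe.load_lora_weights("your-org/flux-accel-lora")
pipe.fuse_lora()            # bake the LoRA weights into the base model
pipe.unload_lora_weights()  # drop the now-redundant LoRA layers

# the fused pipeline can then be saved as a standalone checkpoint
pipe.save_pretrained("FLUX.1_Kontext-Lightning")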

License

This model falls under the FLUX.1 [dev] Non-Commercial License; please familiarize yourself with its terms.
