Tags: Image-to-Image · Diffusers · flux · lora · replicate
Use from the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Switch "cuda" to "mps" on Apple silicon devices.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("fofr/flux-kontext-dev-ps1-lora")

# The prompt is this LoRA's trigger phrase (see "Trigger words" below).
prompt = "render this image like a ps1 game (no UI)"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
image.save("ps1-output.png")
```
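FLUX pipelines generally expect the input width and height to be divisible by 16, so it can help to snap an arbitrary image to the nearest valid size before calling the pipeline. This is a hedged, self-contained preprocessing sketch (not part of the original snippet); `snap_to_multiple` is a hypothetical helper name:

```python
from PIL import Image

def snap_to_multiple(image: Image.Image, multiple: int = 16) -> Image.Image:
    """Resize an image down to the nearest dimensions divisible by `multiple`."""
    w, h = image.size
    return image.resize((max(multiple, w - w % multiple),
                         max(multiple, h - h % multiple)))

# Example: a 1023x770 image becomes 1008x768.
img = Image.new("RGB", (1023, 770))
print(snap_to_multiple(img).size)  # (1008, 768)
```

You would apply this to `input_image` before passing it to `pipe(...)`.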

Flux Kontext PS1

Prompt
render this image like a ps1 game (no UI)

About this LoRA

This is a LoRA for the FLUX.1-kontext-dev image-to-image model. It can be used with diffusers or ComfyUI.

It was trained on Replicate.

Trigger words

You should include "render this image like a ps1 game (no UI)" in your prompt to trigger the PS1 style.
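If you want to combine the trigger phrase with extra style instructions, one simple pattern is to always prepend it when composing prompts. This is a hypothetical helper, not part of this model card; `TRIGGER` and `build_prompt` are names chosen here for illustration:

```python
# Trigger phrase for this LoRA, taken from the "Trigger words" section above.
TRIGGER = "render this image like a ps1 game (no UI)"

def build_prompt(extra: str = "") -> str:
    """Compose a prompt that always starts with the LoRA's trigger phrase."""
    return f"{TRIGGER}, {extra}" if extra else TRIGGER

print(build_prompt("low-poly textures, dithered shading"))
# render this image like a ps1 game (no UI), low-poly textures, dithered shading
```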

Contribute your own examples

You can use the community tab to add images that show off what you've made with this LoRA.
