How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Switch device_map to "mps" for Apple Silicon devices.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("ovi054/extract-clothes-kontext-dev-lora")

prompt = "extract only the clothes over a plain background, product photography style"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
image.save("extracted-clothes.png")
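The hard-coded `device_map="cuda"` above can instead be chosen at runtime. A minimal sketch, where the `pick_device` helper is illustrative and not part of diffusers:

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Return a device string for device_map: prefer CUDA, then Apple's mps, else CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

# Typical usage (assumes torch is installed):
#   import torch
#   device = pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
#   pipe = DiffusionPipeline.from_pretrained(..., device_map=device)
```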

extract-clothes-kontext-dev-lora

Model description

Trigger words

You should use "extract only the clothes over a plain background, product photography style" to trigger the image generation.

It is recommended to name a specific garment (e.g., shirt, pants, dress) for better results. Example prompt: "extract only the shirt over a plain background, product photography style"
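Since the trigger phrase varies only in the garment word, prompts for different clothing items can be composed programmatically. A small sketch; the `build_prompt` helper and garment list are illustrative, not part of the model:

```python
def build_prompt(garment: str) -> str:
    # Fill the specific garment into the recommended trigger pattern.
    return f"extract only the {garment} over a plain background, product photography style"

for garment in ["shirt", "pants", "dress"]:
    print(build_prompt(garment))
```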

📊 Examples

(Six input/output image pairs demonstrating clothing extraction are shown on the model page.)

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.

Training at fal.ai

Training was done using fal.ai/models/fal-ai/flux-kontext-trainer.
