import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")  # switch to "mps" for Apple devices
pipe.load_lora_weights("chaitnya26/kontext-tryon7-fork")
prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=input_image, prompt=prompt, guidance_scale=2.5).images[0]

These are batch runs of the banana model, used to compare against the outfit-swap results of this mask-free Kontext outfit-swap LoRA.
All the example results were produced without a mask, by passing the two images in directly.
Based on the test results, this LoRA has an advantage over the banana model in terms of consistency.
The workflow is similar for each image, with only minor parameter adjustments; you can view the details by dragging an image into ComfyUI.
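Since the examples combine two input images with no mask, one plausible way to prepare the input (an assumption here; the card does not document the exact preprocessing, and the `stitch_side_by_side` helper below is hypothetical) is to stitch the person and garment images onto a single canvas before passing it to the pipeline. A minimal sketch with PIL:

```python
from PIL import Image

def stitch_side_by_side(left: Image.Image, right: Image.Image) -> Image.Image:
    """Place two images next to each other on one canvas, matched in height."""
    h = max(left.height, right.height)
    # Scale each image so both share the same height, preserving aspect ratio.
    left = left.resize((int(left.width * h / left.height), h))
    right = right.resize((int(right.width * h / right.height), h))
    canvas = Image.new("RGB", (left.width + right.width, h), "white")
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width, 0))
    return canvas

# Solid-color placeholders stand in for the person and garment photos.
a = Image.new("RGB", (512, 768), "red")
b = Image.new("RGB", (640, 640), "blue")
combined = stitch_side_by_side(a, b)
print(combined.size)
```

The combined canvas would then be passed as `image=` in the pipeline call above in place of a single photo.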
Discussion on Reddit:
Model tree for chaitnya26/kontext-tryon7-fork
Base model: black-forest-labs/FLUX.1-Kontext-dev