Instructions to use tryonlabs/FLUX.1-dev-LoRA-Outfit-Generator with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use tryonlabs/FLUX.1-dev-LoRA-Outfit-Generator with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("tryonlabs/FLUX.1-dev-LoRA-Outfit-Generator")

prompt = (
    "A dress with Color: Black, Department: Dresses, Detail: High Low, "
    "Fabric-Elasticity: No Stretch, Fit: Fitted, Hemline: Slit, "
    "Material: Gabardine, Neckline: Collared, Pattern: Solid, "
    "Sleeve-Length: Sleeveless, Style: Casual, Type: Tunic, Waistline: Regular"
)
image = pipe(prompt).images[0]
image.save("outfit.png")  # write the generated image to disk
```
- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
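The LoRA is conditioned on comma-separated "Attribute: Value" prompts like the one in the Diffusers snippet above. A small helper keeps those prompts consistently formatted; this is a hypothetical sketch, not part of the model card, and `outfit_prompt` is an assumed name.

```python
# Hypothetical helper: assemble the "Key: Value, Key: Value" prompt style used
# by this outfit-generator LoRA from keyword arguments. Underscores in keyword
# names are mapped to hyphens so keys like Sleeve_Length become "Sleeve-Length".

def outfit_prompt(garment: str, **attrs: str) -> str:
    """Join attribute pairs into the comma-separated prompt format."""
    pairs = ", ".join(f"{k.replace('_', '-')}: {v}" for k, v in attrs.items())
    return f"A {garment} with {pairs}"

prompt = outfit_prompt(
    "dress",
    Color="Black",
    Department="Dresses",
    Fit="Fitted",
    Material="Gabardine",
    Pattern="Solid",
    Sleeve_Length="Sleeveless",
    Style="Casual",
)
print(prompt)  # → A dress with Color: Black, Department: Dresses, Fit: Fitted, Material: Gabardine, Pattern: Solid, Sleeve-Length: Sleeveless, Style: Casual
```

The builder only formats the string; which attribute keys the LoRA actually responds to is determined by its training data.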
Results are luck-based ("gacha"): generated images often show the clothing worn on models
#1
by HUG-NAN - opened
Testing on FLUX dev with the LoRA weight set to 1, the simple+beta scheduler, and 25 generation steps, images of people wearing the clothing appear often, and even if the LoRA weight is increased there is still a high probability of drawing a person. Increasing the steps with the Hyper 8-step LoRA gives an even higher probability of generating an on-model image.
I also realized that the LoRA helps with realism for some reason. My hypothesis is that training on pictures of clothing alone helped undo some of the base model's DPO training.
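The discussion above centers on tuning the LoRA weight to trade off clothing-only renders against on-model renders. With Diffusers' PEFT integration, a Flux pipeline's LoRA scale can be passed per call through `joint_attention_kwargs`; the sketch below assumes that backend, and `lora_call_kwargs` is a hypothetical helper. The pipeline calls are shown as comments so the sketch stays self-contained.

```python
# Hypothetical sketch: sweep the per-call LoRA scale to find a weight where the
# output stays clothing-only. Assumes the diffusers PEFT backend, which reads
# the LoRA scale from joint_attention_kwargs["scale"] in Flux pipelines.

def lora_call_kwargs(scale: float) -> dict:
    """Build per-call kwargs that set the LoRA scale for a Flux pipeline."""
    return {"joint_attention_kwargs": {"scale": scale}}

# Usage (requires the pipeline from the Diffusers snippet above):
# for scale in (0.8, 1.0, 1.2):
#     image = pipe(prompt, num_inference_steps=25, **lora_call_kwargs(scale)).images[0]
#     image.save(f"outfit_scale_{scale}.png")

print(lora_call_kwargs(1.0))  # → {'joint_attention_kwargs': {'scale': 1.0}}
```

Per-call scaling avoids reloading or fusing the LoRA between runs, which makes a sweep like this cheap.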