How to use with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple Silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ABGroup2/ImportLora")

prompt = "a pink crystal gem suspended in space, frstingln illustration"
image = pipe(prompt).images[0]
image.save("output.png")
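For reproducible comparisons (for example, with and without the LoRA or the trigger phrase), a fixed-seed generator can be passed to the pipeline call. This is a minimal sketch; the commented line assumes the `pipe` object from the snippet above.

```python
import torch

# Seed a CPU generator so repeated runs produce the same latents
generator = torch.Generator("cpu").manual_seed(42)

# Pass it to the pipeline call from the snippet above, e.g.:
# image = pipe(prompt, generator=generator).images[0]
```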

Frosting Lane Flux

Example prompts (all use the negative prompt "bad, messy"):

- a pink crystal gem suspended in space, frstingln illustration
- a man wearing a hunters cap frstingln illustration
- a beautiful castle frstingln illustration
- a small girl with a big grin, confident, on their toes, holding a sign that says "I LOVE PROMPTS!" frstingln illustration

Model description

A Flux Dev training of my Frosting Lane Redux model.

Curious to see what y'all think! I actually don't think the trigger makes a massive difference.

Trigger words

You should use frstingln illustration to trigger the image generation.
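Since the trigger phrase just needs to appear in the prompt text, a small helper (hypothetical, not part of the model card) can append it consistently, matching the style of the example prompts above:

```python
TRIGGER = "frstingln illustration"

def with_trigger(prompt: str) -> str:
    """Append the LoRA trigger phrase unless the prompt already contains it."""
    return prompt if TRIGGER in prompt else f"{prompt} {TRIGGER}"

print(with_trigger("a beautiful castle"))
# -> a beautiful castle frstingln illustration
```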

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
