How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple-silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("alvdansen/softpasty-flux-dev")

prompt = "three friends walking down a street, street fashion, araminta_illus illustration style"
image = pipe(prompt).images[0]
image.save("output.png")

Soft Pasty (Flux Dev)

Example prompts:

- three friends walking down a street, street fashion, araminta_illus illustration style
- a girl wearing a flower crown, araminta_illus illustration style
- a girl with brown-blonde hair and big round glasses, tired, white tank top, jeans, araminta_illus illustration style
- girl, neck tuft, white hair, sheep horns, blue eyes, araminta_illus illustration style
- a little boy in a sailor suit frowning, araminta_illus illustration style
- a woman with flowers on her dress standing in the moonlight, araminta_illus illustration style

Model description

Here's a model trained entirely on one style of my own illustrations.

Quite happy with the results and happy to share the model. Big thank you to Glif.app for sponsoring the training!


Trigger words

Include araminta_illus illustration style in your prompt to trigger the style.
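As a small illustration (the helper name below is my own, not part of the model or diffusers), you can make sure every prompt carries the trigger phrase before passing it to the pipeline:

```python
TRIGGER = "araminta_illus illustration style"

def with_trigger(prompt: str) -> str:
    """Append the trigger phrase unless the prompt already contains it."""
    if TRIGGER in prompt:
        return prompt
    return f"{prompt}, {TRIGGER}"

print(with_trigger("a girl wearing a flower crown"))
# -> a girl wearing a flower crown, araminta_illus illustration style
```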

Download model

Weights for this model are available in Safetensors format.

Download them from the Files & versions tab.
