How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("Twinwaffle/Dysfunctional_epoch_5")

prompt = "UNICODE\u0000\u0000e\u0000a\u0000c\u0000h\u0000 \u0000f\u0000a\u0000c\u0000e\u0000 \u0000o\u0000c\u0000c\u0000u\u0000p\u0000y\u0000i\u0000n\u0000g\u0000 \u0000a\u0000 \u0000s\u0000e\u0000c\u0000t\u0000i\u0000o\u0000n\u0000 \u0000o\u0000f\u0000 \u0000t\u0000h\u0000e\u0000 \u0000c\u0000a\u0000n\u0000v\u0000a\u0000s\u0000.\u0000 \u0000 \u0000T\u0000h\u0000e\u0000 \u0000w\u0000o\u0000m\u0000a\u0000n\u0000 \u0000o\u0000n\u0000 \u0000t\u0000h\u0000e\u0000 \u0000f\u0000a\u0000r\u0000 \u0000l\u0000e\u0000f\u0000t\u0000 \u0000h\u0000a\u0000s\u0000 \u0000a\u0000 \u0000s\u0000t\u0000r\u0000i\u0000k\u0000i\u0000n\u0000g\u0000,\u0000 \u0000a\u0000b\u0000s\u0000t\u0000r\u0000a\u0000c\u0000t\u0000 \u0000h\u0000u\u0000m\u0000a\u0000n\u0000 \u0000f\u0000i\u0000g\u0000u\u0000r\u0000e\u0000s\u0000 \u0000w\u0000i\u0000t\u0000h\u0000 \u0000l\u0000a\u0000r\u0000g\u0000e\u0000,\u0000 \u0000w\u0000i\u0000t\u0000h\u0000 \u0000h\u0000e\u0000r\u0000 \u0000e\u0000y\u0000e\u0000s\u0000 \u0000w\u0000i\u0000d\u0000e\u0000 \u0000o\u0000p\u0000e\u0000n\u0000 \u0000a\u0000n\u0000d\u0000 \u0000a\u0000 \u0000l\u0000o\u0000o\u0000k\u0000 \u0000o\u0000f\u0000 \u0000s\u0000h\u0000o\u0000c\u0000k\u0000 \u0000o\u0000r\u0000 \u0000f\u0000e\u0000a\u0000r\u0000.\u0000 \u0000H\u0000e\u0000r\u0000 \u0000s\u0000k\u0000i\u0000n\u0000 \u0000i\u0000s\u0000 \u0000a\u0000 \u0000p\u0000a\u0000l\u0000e\u0000,\u0000 \u0000e\u0000x\u0000a\u0000g\u0000g\u0000e\u0000r\u0000a\u0000t\u0000e\u0000d\u0000 \u0000f\u0000i\u0000g\u0000u\u0000r\u0000e\u0000s\u0000 \u0000w\u0000i\u0000t\u0000h\u0000 \u0000e\u0000l\u0000o\u0000n\u0000g\u0000a\u0000t\u0000e\u0000d\u0000,\u0000 \u0000c\u0000o\u0000l\u0000o\u0000r\u0000f\u0000u\u0000l\u0000,\u0000 \u0000h\u0000a\u0000n\u0000d\u0000-\u0000s\u0000t\u0000i\u0000t\u0000c\u0000h\u0000e\u0000d\u0000 \u0000q\u0000u\u0000i\u0000l\u0000t\u0000 \u0000f\u0000e\u0000a\u0000t\u0000u\u0000r\u0000i\u0000n\u0000g\u0000 \u0000a\u0000 
\u0000s\u0000t\u0000y\u0000l\u0000i\u0000z\u0000e\u0000d\u0000,\u0000 \u0000s\u0000w\u0000i\u0000r\u0000l\u0000i\u0000n\u0000g\u0000 \u0000l\u0000i\u0000n\u0000e\u0000s\u0000 \u0000s\u0000u\u0000g\u0000g\u0000e\u0000s\u0000t\u0000i\u0000n\u0000g\u0000 \u0000a\u0000 \u0000f\u0000o\u0000r\u0000e\u0000s\u0000t\u0000 \u0000o\u0000r\u0000 \u0000f\u0000o\u0000l\u0000i\u0000a\u0000g\u0000e\u0000 \u0000b\u0000a\u0000c\u0000k\u0000g\u0000r\u0000o\u0000u\u0000n\u0000d\u0000.\u0000 \u0000H\u0000e\u0000r\u0000 \u0000e\u0000y\u0000e\u0000s\u0000 \u0000a\u0000r\u0000e\u0000 \u0000c\u0000l\u0000o\u0000s\u0000e\u0000d\u0000,\u0000 \u0000g\u0000i\u0000v\u0000i\u0000n\u0000g\u0000 \u0000i\u0000t\u0000 \u0000a\u0000 \u0000t\u0000e\u0000x\u0000t\u0000u\u0000r\u0000e\u0000d\u0000,\u0000 \u0000w\u0000h\u0000i\u0000t\u0000e\u0000,\u0000 \u0000c\u0000r\u0000e\u0000a\u0000t\u0000i\u0000n\u0000g\u0000 \u0000a\u0000 \u0000p\u0000a\u0000t\u0000c\u0000h\u0000y\u0000"
image = pipe(prompt).images[0]
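The prompt for this example was scraped from image metadata as an EXIF-style string: a literal "UNICODE" character-set marker followed by UTF-16 text, which shows up in plain-text dumps as NUL-interleaved characters. As a minimal sketch (the function name is hypothetical, and it assumes the underlying text is plain ASCII beneath the NUL padding), such a string can be recovered like this:

```python
def decode_exif_unicode(raw: str) -> str:
    # Drop the EXIF "UNICODE" character-set marker, then strip the
    # NUL bytes that UTF-16 encoding interleaves with ASCII characters.
    body = raw.removeprefix("UNICODE")
    return body.replace("\x00", "")

sample = "UNICODE\x00\x00e\x00a\x00c\x00h\x00 \x00f\x00a\x00c\x00e\x00"
print(decode_exif_unicode(sample))  # each face
```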

Example output: dysfunctional-1-epoch5

Prompt
each face occupying a section of the canvas. The woman on the far left has a striking, abstract human figures with large, with her eyes wide open and a look of shock or fear. Her skin is a pale, exaggerated figures with elongated, colorful, hand-stitched quilt featuring a stylized, swirling lines suggesting a forest or foliage background. Her eyes are closed, giving it a textured, white, creating a patchy

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
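Besides the Files & versions tab, the weights can also be fetched from a script or terminal; a sketch using the huggingface_hub CLI, assuming the default cache location is acceptable:

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli download Twinwaffle/Dysfunctional_epoch_5
```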

Downloads last month: 6

Model tree for Twinwaffle/Dysfunctional_epoch_5
This model is an adapter (one of 398) of the base model stabilityai/stable-diffusion-3.5-large.