Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Liberata/illustrious-xl-v1.0",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # switch to "mps" for Apple devices
pipe.load_lora_weights("Shero448/melpha")

prompt = "general, highres, ultra-detailed, very aesthetic, best quality, best hands, best eyes, easynegative"
image = pipe(prompt).images[0]
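The comment above suggests switching to "mps" on Apple devices. A minimal sketch (standard PyTorch availability checks, not part of this model card) of picking the backend automatically instead of hardcoding it:

```python
import torch

# Pick the best available backend: CUDA GPU, then Apple Silicon (MPS), else CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed to pipe.to(device).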


Trigger words

You should use the following tags to trigger the image generation: Melpha, yellow_hair, long_hair, light blue_eyes, huge_breasts, glasses.
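A minimal sketch of combining these trigger words with the quality tags from the example prompt above into a single prompt string (the build_prompt helper is illustrative, not part of any library):

```python
# Trigger words listed in this model card
TRIGGER_WORDS = ["Melpha", "yellow_hair", "long_hair",
                 "light blue_eyes", "huge_breasts", "glasses"]

# Quality tags taken from the example prompt above
QUALITY_TAGS = ["general", "highres", "ultra-detailed",
                "very aesthetic", "best quality"]

def build_prompt(tags):
    # Danbooru-style prompts are comma-separated tag lists
    return ", ".join(tags)

prompt = build_prompt(TRIGGER_WORDS + QUALITY_TAGS)
print(prompt)
```

The trigger words go first so the character tags carry more weight in the prompt.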

Download model

Download the model weights in the Files & versions tab.
