Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("John6666/wai-nsfw-illustrious-sdxl-v150-sdxl", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("agmjd/miyabi")

prompt = "masterpiece,best quality,very aesthetic,absurdres,highres,newest,"
image = pipe(prompt).images[0]
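
As an aside: the prompt above was originally pasted as raw EXIF UserComment bytes (a "UNICODE" charset marker followed by UTF-16 text), which is how prompts copied straight out of image metadata often look. A minimal sketch of decoding such a payload, assuming the standard 8-byte marker layout (the `decode_user_comment` helper name is ours, not part of any library):

```python
# An EXIF UserComment payload: an 8-byte charset marker ("UNICODE\x00")
# followed by UTF-16 encoded text (truncated here for brevity).
raw = (b"UNICODE\x00"
       b"\x00m\x00a\x00s\x00t\x00e\x00r\x00p\x00i\x00e\x00c\x00e")

def decode_user_comment(payload: bytes) -> str:
    marker, text = payload[:8], payload[8:]
    if marker != b"UNICODE\x00":
        raise ValueError("unsupported charset marker")
    # The spec does not fix byte order; a leading NUL byte implies big-endian.
    codec = "utf-16-be" if text[:1] == b"\x00" else "utf-16-le"
    return text.decode(codec)

print(decode_user_comment(raw))  # masterpiece
```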