How to use from the Diffusers library

pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "John6666/prefect-illustrious-xl-v15-sdxl",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("Shero448/BlackWhiplash")

# "blaash" is the LoRA trigger word (see Trigger words below)
prompt = "blaash, 1girl, solo, bakugou mitsuki, purple shirt, ribbed sweater, blonde hair, mature female, portrait, looking at viewer, simple background, presenting, depth of field"
image = pipe(prompt).images[0]


Prompt
1girl, solo, bakugou mitsuki, purple shirt, ribbed sweater, blonde hair, mature female, portrait, looking at viewer, simple background, presenting, depth of field,
Negative Prompt
lowres, worst quality, low quality, bad anatomy, bad hands, multiple views, 4koma, censored, jpeg artifacts
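The prompt pair above maps directly onto the pipeline call. A minimal sketch, assuming a pipeline `pipe` loaded as in the Diffusers snippet above, with the trigger word prepended per the Trigger words section; the generation call itself is left commented out because it requires the downloaded weights and a GPU/MPS device:

```python
# Prompt pair from this card; "blaash" prepended as the card's stated trigger word.
prompt = (
    "blaash, 1girl, solo, bakugou mitsuki, purple shirt, ribbed sweater, "
    "blonde hair, mature female, portrait, looking at viewer, "
    "simple background, presenting, depth of field"
)
negative_prompt = (
    "lowres, worst quality, low quality, bad anatomy, bad hands, "
    "multiple views, 4koma, censored, jpeg artifacts"
)

# With a loaded pipeline `pipe` (needs the model weights and a device):
# image = pipe(prompt, negative_prompt=negative_prompt).images[0]
```

`negative_prompt` is a standard parameter of SDXL pipelines and is how the second prompt above is meant to be supplied.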

Trigger words

You should use blaash to trigger the image generation.
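Since generation silently produces weaker results when the trigger word is missing, it can help to guard for it before calling the pipeline. A minimal sketch; the `with_trigger` helper is hypothetical, not part of any library:

```python
TRIGGER = "blaash"  # trigger word stated by this card

def with_trigger(prompt: str, trigger: str = TRIGGER) -> str:
    """Prepend the trigger word unless the prompt already contains it (hypothetical helper)."""
    tags = [t.strip() for t in prompt.split(",")]
    if trigger in tags:
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("1girl, solo, portrait"))  # -> blaash, 1girl, solo, portrait
```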

Download model

Download the model files from the Files & versions tab.

