Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" on Apple Silicon devices
pipe = DiffusionPipeline.from_pretrained("RunDiffusion/Juggernaut-XL-v9", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("ApathyGhost/SynthModelGal_DMD2")

prompt = "SynthModel, neon synthwave girl, cyberpunk city"  # include the SynthModel trigger word
image = pipe(prompt).images[0]

Model description

Synthwave and cyberpunk neon girlies. It's a bit janky; a much more user-friendly V2 is planned. Start with standard-ish LCM/DMD2 settings and adjust from there.

With a DMD2 LoRA, faces can be a bit janky without Face Restore.

CFG: 1-2

Steps: 8

Sampler: LCM

Clip Skip: 2
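The settings above can be sketched in diffusers roughly as follows. This is a minimal, untested sketch: it assumes the LCM sampler maps to `LCMScheduler`, and the example prompt text is invented (only the `SynthModel` trigger word comes from this card). Note that diffusers' `clip_skip` counting may differ by one from A1111-style UIs.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the base model the LoRA was trained against
pipe = DiffusionPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16
).to("cuda")

# Sampler: LCM
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("ApathyGhost/SynthModelGal_DMD2")

image = pipe(
    "SynthModel, neon synthwave girl, cyberpunk city",  # example prompt (invented)
    num_inference_steps=8,   # Steps: 8
    guidance_scale=1.5,      # CFG: 1-2
    clip_skip=1,             # roughly "Clip Skip 2" in A1111 terms; may need tuning
).images[0]
```

If faces come out janky, this is where a face-restore pass (e.g. in your UI of choice) would go afterwards, per the note above.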

Trigger words

You should use SynthModel to trigger the image generation.

Download model

Download them in the Files & versions tab.
