Tags: Text-to-Image · Diffusers · Safetensors · English · StableDiffusionPipeline · stable-diffusion · stable-diffusion-diffusers
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("wavymulder/wavyfusion", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

Wavyfusion
CKPT DOWNLOAD LINK - Wavyfusion is a Dreambooth model trained on a very diverse dataset, ranging from photographs to paintings. The goal was to make a varied, general-purpose model for illustrated styles.
In your prompt, use the activation token: wa-vy style
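Since every prompt needs to carry the activation token, a small helper can keep it consistent. This is only an illustrative sketch, not part of the model card: the `wavy_prompt` function is hypothetical, and the commented-out call assumes the `pipe` object loaded in the snippet above.

```python
# Hypothetical helper: prepend the activation token so it is never forgotten.
def wavy_prompt(subject: str, token: str = "wa-vy style") -> str:
    """Build a prompt that leads with the model's activation token."""
    return f"{token}, {subject}"

prompt = wavy_prompt("Astronaut in a jungle, cold color palette, muted colors")
# With the pipeline loaded as above (requires a GPU):
# image = pipe(prompt).images[0]
```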
Gradio
We support a Gradio Web UI to run wavyfusion.
We use wa-vy instead of wavy because 'wavy style' introduced unwanted oceans and wavy hair.
Trained from Stable Diffusion 1.5 with a VAE.
There are a lot of cool styles you can achieve with this model. Please see this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.
And here is a batch of 49 images (not cherry-picked), in both euler_a and DPM++ 2M Karras.
Special thanks to Nitrosocke and Guizmus