Use from the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ppbrown/kl-f8ch32-alpha1", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

kl-f8ch32-alpha1

This is an alpha because it is NOT release quality.

It was created with the tools in https://github.com/ppbrown/sd15_vae-f8c32

It started from the SD VAE (f8c4) with extra latent channels squeezed in, then was retrained to take advantage of them, up to a point.
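A hedged sketch of one way channels can be "squeezed in" (this is NOT the actual tooling; see the linked GitHub repo for that): widen a conv layer's outputs, copying the trained f8c4 weights into the first slots and zero-initializing the new channels, so the widened model starts from the old model's behavior before retraining.

```python
import torch
import torch.nn as nn

def widen_out_channels(old: nn.Conv2d, new_out: int) -> nn.Conv2d:
    """Return a conv with more output channels; the first old.out_channels
    reproduce the old layer exactly, the rest start at zero."""
    new = nn.Conv2d(old.in_channels, new_out,
                    kernel_size=old.kernel_size,
                    stride=old.stride,
                    padding=old.padding)
    with torch.no_grad():
        new.weight.zero_()
        new.bias.zero_()
        new.weight[: old.out_channels] = old.weight  # reuse trained filters
        new.bias[: old.out_channels] = old.bias
    return new

# The SD VAE encoder ends in a conv emitting 2 * 4 = 8 channels (mean and
# logvar of the latent distribution); widening it to 2 * 32 = 64 channels
# gives an f8c32 bottleneck. The decoder's input conv would be widened the
# same way, along its in_channels dimension instead.
old_conv = nn.Conv2d(512, 8, kernel_size=3, padding=1)
new_conv = widen_out_channels(old_conv, 64)
```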

Right now it's better than the original VAE, but NOT as good as flux2's 32-channel VAE, or even ostris's f8c16.

I'm looking for ways to get the final finesse into it.

Here's a comparison of its current state: https://www.instarestoration.com/slider/oarYf8HiElu5n1EvE-VZY
