Instructions for using lint/anime_vae with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use lint/anime_vae with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("lint/anime_vae", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
VAE for anime style checkpoints
Converted pastel-waifu-diffusion.vae.pt from https://huggingface.co/andite/pastel-mix to the diffusers format.
Example usage
```python
from diffusers import AutoencoderKL, StableDiffusionPipeline
import torch

# Load the anything-v4.0 checkpoint with the anime VAE swapped in
pipe = StableDiffusionPipeline.from_pretrained(
    pretrained_model_name_or_path='andite/anything-v4.0',
    vae=AutoencoderKL.from_pretrained('lint/anime_vae'),
)

# Move every torch module in the pipeline to the GPU in half precision
for component in pipe.components.values():
    if hasattr(component, 'device'):
        component.to('cuda', torch.float16)

out = pipe('1girl, blue eyes')
image = out.images[0]
```
Inference Providers
This model isn't deployed by any Inference Provider.