How to use with the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Load in bfloat16; switch "cuda" to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "codemichaeld/FramePainter_UnetQunatizedFP8",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

FP8 Pruned Model (E5M2)

Converted from: Yabo/FramePainter
File: unet_diffusion_pytorch_model.safetensors → unet_diffusion_pytorch_model-fp8-e5m2.safetensors

Quantization: FP8 (E5M2)
Converted by: codemichaeld
Date: 2025-12-01 06:10:01

⚠️ FP8 models require PyTorch ≥ 2.1 and compatible hardware (e.g., NVIDIA Ada/Hopper) for full acceleration. May fall back to FP16 on older GPUs.
