How to use codemichaeld/wan2.1_2x_fp8_l02 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "codemichaeld/wan2.1_2x_fp8_l02",
    dtype=torch.bfloat16,
    device_map="cuda",
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

FP8 Model with Precision Recovery
- Source: https://huggingface.co/spacepxl/Wan2.1-VAE-upscale2x
- File: Wan2.1_VAE_upscale2x_imageonly_real_v1.safetensors
- FP8 Format: E5M2
- Architecture: vae
- Precision Recovery Type: Correction Factors
- Precision Recovery File: Wan2.1_VAE_upscale2x_imageonly_real_v1-correction-vae.safetensors
- FP8 File: Wan2.1_VAE_upscale2x_imageonly_real_v1-fp8-e5m2.safetensors
Usage (Inference)
```python
import os

import torch
from safetensors.torch import load_file

# Load the FP8 model weights
fp8_state = load_file("Wan2.1_VAE_upscale2x_imageonly_real_v1-fp8-e5m2.safetensors")

# Load the precision recovery file, if present
recovery_path = "Wan2.1_VAE_upscale2x_imageonly_real_v1-correction-vae.safetensors"
recovery_state = load_file(recovery_path) if os.path.exists(recovery_path) else {}

# Does the recovery file use the low-rank (LoRA) layout?
has_lora = any(k.startswith("lora_A.") for k in recovery_state)

# Reconstruct high-precision weights
reconstructed = {}
for key, value in fp8_state.items():
    fp8_weight = value.to(torch.float32)
    if has_lora and f"lora_A.{key}" in recovery_state and f"lora_B.{key}" in recovery_state:
        # LoRA approach: add the low-rank product B @ A
        A = recovery_state[f"lora_A.{key}"].to(torch.float32)
        B = recovery_state[f"lora_B.{key}"].to(torch.float32)
        reconstructed[key] = fp8_weight + B @ A
    elif f"correction.{key}" in recovery_state:
        # Correction factor approach: add the stored residual
        correction = recovery_state[f"correction.{key}"].to(torch.float32)
        reconstructed[key] = fp8_weight + correction
    else:
        # No recovery data for this tensor; keep the upcast FP8 weight
        reconstructed[key] = fp8_weight
```
Requires PyTorch ≥ 2.1 for FP8 support.