
sd15-onnx-fp32

ONNX-optimized export of runwayml/stable-diffusion-v1-5 at FP32 precision for maximum compatibility.

Available Components

  • unet: FP32 optimized
  • vae_decoder: FP32 optimized
  • vae_encoder: FP32 optimized
  • text_encoder: FP32 optimized

Usage

Basic CPU Usage

from optimum.onnxruntime import ORTStableDiffusionPipeline

# Models use FP32 for maximum compatibility
pipe = ORTStableDiffusionPipeline.from_pretrained(
    "Mitchins/sd15-onnx-fp32",
    provider="CPUExecutionProvider"
)

result = pipe("a red apple on a table")
result.images[0].save("output.png")

GPU Usage (CUDA)

# Requires the onnxruntime-gpu package
pipe = ORTStableDiffusionPipeline.from_pretrained(
    "Mitchins/sd15-onnx-fp32",
    provider="CUDAExecutionProvider"
)
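When the target hardware varies between deployments, the provider can be selected at runtime instead of hard-coded. A minimal sketch (the `pick_provider` helper is hypothetical; in practice the `available` list would come from `onnxruntime.get_available_providers()`):

```python
# Hypothetical helper: return the first supported execution provider from a
# preference-ordered list. In a real script, `available` would come from
# onnxruntime.get_available_providers().
def pick_provider(available, preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError("no supported execution provider available")

# Falls back to CPU when CUDA is not present:
print(pick_provider(["CPUExecutionProvider", "AzureExecutionProvider"]))  # CPUExecutionProvider
```

The returned string can then be passed as the `provider` argument to `from_pretrained`.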

Performance Benefits

  • Compatibility: Works reliably on CPU and GPU
  • Speed: benefits from ONNX Runtime graph optimizations
  • Stability: No type mismatch issues
  • Quality: Full FP32 precision

File Structure

All models are FP32 for compatibility:

unet/

  • model.onnx (1.2MB + 3278.8MB data) - FP32 precision

vae_decoder/

  • model.onnx (188.9MB) - FP32 precision

vae_encoder/

  • model.onnx (130.4MB) - FP32 precision

text_encoder/

  • model.onnx (469.7MB) - FP32 precision
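Since FP32 stores 4 bytes per weight, the on-disk sizes above give a rough parameter count for each component. A quick back-of-the-envelope sketch (assuming the listed sizes are mebibytes):

```python
# Rough FP32 parameter count: 4 bytes per weight.
def params_from_fp32_mib(size_mib: float) -> float:
    return size_mib * 1024**2 / 4

# The UNet's ~3278.8MB external-data file works out to roughly 860M
# parameters, consistent with the Stable Diffusion v1.5 UNet.
print(f"{params_from_fp32_mib(3278.8) / 1e6:.0f}M")  # 860M
```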

Generated: 2025-08-08 11:01 UTC with onnxruntime 1.22.1
