Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Heliosoph/mo-di-hyper-onnx", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = "modern disney style astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=4, guidance_scale=1.0).images[0]

Mo-Di Diffusion + Hyper-SD (4-step) ONNX export

ONNX export of nitrosocke/mo-di-diffusion with the ByteDance/Hyper-SD 4-step LoRA fused into the UNet. SD 1.5 architecture, 512×512 native, Euler scheduler, CFG = 1, 4 steps.

Mo-Di is nitrosocke's "modern Disney style" fine-tune; it produces character art with the late-2010s Disney/Pixar look. The activator phrase commonly used upstream is modern disney style, and including it in prompts gives stronger style adherence.
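Since the activator phrase is easy to forget, a tiny helper can guarantee it is present. This is a sketch: the function name and behavior are illustrative, not part of any repo script.

```python
# Illustrative helper: ensure the Mo-Di activator phrase is in the prompt.
ACTIVATOR = "modern disney style"

def with_activator(prompt: str) -> str:
    """Prepend the activator phrase unless it already appears (case-insensitive)."""
    if ACTIVATOR in prompt.lower():
        return prompt
    return f"{ACTIVATOR}, {prompt}"
```

For example, `with_activator("portrait of a corgi")` yields `"modern disney style, portrait of a corgi"`, while a prompt that already contains the phrase is returned unchanged.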

This repository is a converted artifact; training credit goes to nitrosocke (Mo-Di) and ByteDance (Hyper-SD).

What this repo contains

model_index.json
feature_extractor/
scheduler/
text_encoder/
tokenizer/
unet/                   # Mo-Di UNet + Hyper-SD-15 4-step LoRA fused in
vae_decoder/
vae_encoder/

How it was produced

  1. Load nitrosocke/mo-di-diffusion via diffusers.
  2. Fuse the ByteDance/Hyper-SD Hyper-SD15-4steps-lora.safetensors LoRA into the UNet.
  3. Export with optimum-cli export onnx.
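The steps above can be sketched with diffusers' LoRA-fusing API. This is a sketch under assumptions: the exact flags used by scripts/export-mo-di-hyper.ps1 are not reproduced here, the optimum-cli invocation shape is assumed, and running the fuse step downloads both models.

```python
def onnx_export_cmd(model_dir: str, out_dir: str) -> list[str]:
    # Step 3: an optimum-cli invocation of the assumed shape
    return ["optimum-cli", "export", "onnx", "--model", model_dir, out_dir]

def fuse_mo_di_hyper(out_dir: str = "mo-di-hyper-fused") -> None:
    # Steps 1-2: load the base model, then fuse the 4-step LoRA into the UNet
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "nitrosocke/mo-di-diffusion", torch_dtype=torch.float32
    )
    pipe.load_lora_weights(
        "ByteDance/Hyper-SD", weight_name="Hyper-SD15-4steps-lora.safetensors"
    )
    pipe.fuse_lora()  # bake the LoRA into the UNet weights
    pipe.save_pretrained(out_dir)  # directory then passed to onnx_export_cmd(...)
```

The fused pipeline is saved as a regular diffusers checkpoint so the exporter sees a plain UNet with the LoRA already merged in.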

Toolchain: optimum 1.24.0, diffusers 0.31.0, transformers 4.45.2, torch 2.4.x (CUDA 12.4). Conversion script: scripts/export-mo-di-hyper.ps1.

Inference notes

Setting                 Value
Scheduler               Euler
Steps                   4
CFG / guidance scale    1.0
Negative prompt         Skip
Resolution              512×512 native
Activator               Include modern disney style in prompts for stronger adherence
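These settings map onto the common Stable Diffusion pipeline call signature; a small sketch (the helper name is mine, not part of the repo) collects them so they are not re-typed per call.

```python
# Recommended call arguments from the table above (argument names follow the
# standard Stable Diffusion pipeline signature; the helper itself is illustrative).
def hyper_sd_call_kwargs(width: int = 512, height: int = 512) -> dict:
    return {
        "num_inference_steps": 4,  # Hyper-SD 4-step schedule
        "guidance_scale": 1.0,     # CFG = 1, so a negative prompt has no effect
        "width": width,
        "height": height,
    }
```

Usage: `image = pipe(prompt, **hyper_sd_call_kwargs()).images[0]`.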

License

CreativeML OpenRAIL-M (SD 1.5 + Mo-Di + Hyper-SD). License files included. By using this model you accept those terms.

