DreamShaper + Hyper-SD (4-step) → ONNX
ONNX export of Lykon/DreamShaper with the ByteDance/Hyper-SD 4-step LoRA fused into the UNet. SD 1.5 architecture, 512×512 native, designed to run with the Euler scheduler at CFG = 1.0 in 4 inference steps.
DreamShaper is Lykon's stylized SFW fine-tune; it leans more illustrative / fantasy than AbsoluteReality, which is more photorealistic. Pick this one for D&D-style art, character portraits with painterly textures, and concept-art-leaning prompts.
This is a converted artifact, not a new model. All training credit belongs to Lykon (DreamShaper) and ByteDance (Hyper-SD).
What this repo contains
```
model_index.json
feature_extractor/
scheduler/
text_encoder/
tokenizer/
unet/            # DreamShaper UNet + Hyper-SD-15 4-step LoRA fused in
vae_decoder/
vae_encoder/
```
unet/model.onnx is paired with unet/model.onnx_data (external-weights file).
How it was produced
- Load Lykon/DreamShaper via diffusers (bundled VAE).
- Load ByteDance/Hyper-SD's Hyper-SD15-4steps-lora.safetensors via peft and fuse_lora() it into the UNet.
- Save the fused pipeline to a temp directory.
- Export with optimum-cli export onnx --model <temp> <output>.
Toolchain: optimum 1.24.0, diffusers 0.31.0, transformers 4.45.2, torch 2.4.x (CUDA 12.4). Full conversion script: scripts/export-dreamshaper-hyper.ps1.
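The steps above can be sketched in a few lines of diffusers/peft code. This is a hedged reconstruction from the card's description, not the actual scripts/export-dreamshaper-hyper.ps1; it requires diffusers, peft, and optimum, and downloads both source repos:

```python
import subprocess
import tempfile

import torch
from diffusers import StableDiffusionPipeline

# 1) Load DreamShaper (its bundled VAE comes along automatically).
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", torch_dtype=torch.float32
)

# 2) Load the Hyper-SD 4-step LoRA and fuse it into the UNet weights.
pipe.load_lora_weights(
    "ByteDance/Hyper-SD", weight_name="Hyper-SD15-4steps-lora.safetensors"
)
pipe.fuse_lora()

# 3) Save the fused pipeline, then 4) export it to ONNX with optimum-cli.
with tempfile.TemporaryDirectory() as tmp:
    pipe.save_pretrained(tmp)
    subprocess.run(
        ["optimum-cli", "export", "onnx", "--model", tmp, "dreamshaper-hyper-onnx"],
        check=True,
    )
```

Fusing before export matters: ONNX export serializes the UNet's weight tensors directly, so the LoRA must already be merged into them rather than applied as a separate adapter.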
Inference notes
| Setting | Value |
|---|---|
| Scheduler | Euler |
| Steps | 4 |
| CFG / guidance scale | 1.0 |
| Negative prompt | Skip |
| Resolution | 512×512 native |
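A minimal inference sketch matching these settings, using optimum's ONNX Runtime pipeline. This is a hedged example, not a tested snippet from this repo: it assumes `optimum[onnxruntime]` and `diffusers` are installed, and it downloads the model on first run:

```python
from diffusers import EulerDiscreteScheduler
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Load the ONNX pipeline (CPU by default; with onnxruntime-gpu installed,
# pass provider="CUDAExecutionProvider" for GPU inference).
pipe = ORTStableDiffusionPipeline.from_pretrained("Heliosoph/dreamshaper-hyper-onnx")

# Swap in the Euler scheduler, as recommended in the table above.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(
    prompt,
    num_inference_steps=4,  # Hyper-SD 4-step schedule
    guidance_scale=1.0,     # CFG effectively disabled; no negative prompt needed
    height=512,
    width=512,
).images[0]
image.save("astronaut.png")
```

At guidance_scale=1.0 the pipeline skips the unconditional pass entirely, which is why a negative prompt has no effect and can be omitted.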
License
CreativeML OpenRAIL-M (inherited from SD 1.5 + DreamShaper) + the Hyper-SD LoRA's OpenRAIL-M. Both license files are included in this repo. By using this model you accept those terms.
Citation
@misc{lykon-dreamshaper,
author = {Lykon},
title = {DreamShaper},
howpublished = {\url{https://huggingface.co/Lykon/DreamShaper}}
}
@article{ren2024hypersd,
title = {Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis},
author = {Ren, Yuxi and others},
journal = {arXiv preprint arXiv:2404.13686},
year = {2024}
}