Instructions to use ByteDance/Hyper-SD with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use ByteDance/Hyper-SD with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
# the repo contains several LoRA files, so name the one to load
pipe.load_lora_weights(
    "ByteDance/Hyper-SD",
    weight_name="Hyper-FLUX.1-dev-8steps-lora.safetensors",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
What is the difference between the UNet and a normal LoRA?
Hi~
The difference mainly lies in the number of parameters that are fine-tuned.
With full UNet fine-tuning, every weight is updated, so the model can learn the objective better.
With LoRA training, only the injected low-rank matrices are updated and fine-tuned, while the base weights stay frozen.
To clarify, our 1-Step SDXL UNet is NOT merged directly from SDXL-Base and 1-Step LoRA.
Instead, it's fully fine-tuned and re-trained from SDXL-Base. Thanks for your attention❤️!
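To make the parameter-count difference concrete, here is a rough back-of-the-envelope sketch. The layer size and LoRA rank below are made-up illustrative numbers, not exact SDXL shapes:

```python
# Why full fine-tuning touches far more parameters than LoRA.
# The 2048x2048 projection and rank 64 are illustrative values only.
d_in, d_out = 2048, 2048
rank = 64

# Full fine-tuning: every entry of the weight matrix W is trainable.
full_finetune_params = d_in * d_out

# LoRA: W is frozen; only the two injected low-rank factors
# A (rank x d_in) and B (d_out x rank) are trained.
lora_params = rank * d_in + d_out * rank

print(full_finetune_params)                 # 4194304
print(lora_params)                          # 262144
print(full_finetune_params // lora_params)  # 16
```

Even for this single layer, the LoRA update trains 16x fewer parameters, which is why a fully fine-tuned UNet can fit the 1-step objective more closely than a LoRA of the same base model.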
So it's just like an SDXL checkpoint, and not a LoRA?
I mean, the UNet is a checkpoint?
(Sorry, I'm new to this.)
@roktimsardar123
Do you mean the "checkpoint" in ComfyUI?
In ComfyUI, only weights that bundle the UNet, VAE, and text encoder together are called a "checkpoint", and they go under the models/checkpoints folder.
During training, however, any saved model weights can be called a checkpoint in practice.
The Hyper-SDXL-1step-Unet.safetensors file we provide is a raw UNet checkpoint without the VAE and text encoder. It can't be used as a "checkpoint" in ComfyUI, but it can be loaded through the diffusers integration, e.g. in a pipeline.
The Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors file, on the other hand, is a ComfyUI "checkpoint" that contains the UNet, VAE, and text encoder.
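One practical way to tell the two kinds of file apart is to inspect the tensor key prefixes inside the .safetensors file. The sketch below uses toy key lists as stand-ins for the real files, and assumes the usual conventions: ComfyUI/SGM-style checkpoints store the UNet under `model.diffusion_model.`, the VAE under `first_stage_model.`, and the text encoders under `conditioner.` (or `cond_stage_model.`), while a raw diffusers UNet uses bare block names. In practice you would read the keys with `safetensors.safe_open(path, framework="pt").keys()`:

```python
# Hypothetical helper: classify a checkpoint by its key prefixes.
def classify(keys):
    has_unet = any(k.startswith("model.diffusion_model.") for k in keys)
    has_vae = any(k.startswith("first_stage_model.") for k in keys)
    has_te = any(k.startswith(("conditioner.", "cond_stage_model.")) for k in keys)
    if has_unet and has_vae and has_te:
        return "ComfyUI checkpoint (UNet + VAE + text encoder)"
    return "raw UNet checkpoint"

# Toy key sets standing in for the real safetensors contents.
raw_unet_keys = [
    "down_blocks.0.resnets.0.conv1.weight",
    "mid_block.attentions.0.proj_in.weight",
]
comfy_checkpoint_keys = [
    "model.diffusion_model.input_blocks.0.0.weight",
    "first_stage_model.decoder.conv_in.weight",
    "conditioner.embedders.0.transformer.text_model.embeddings.token_embedding.weight",
]

print(classify(raw_unet_keys))          # raw UNet checkpoint
print(classify(comfy_checkpoint_keys))  # ComfyUI checkpoint (UNet + VAE + text encoder)
```

This is only a heuristic sketch: it explains why ComfyUI rejects the raw UNet file when it is placed in models/checkpoints, since the loader finds no VAE or text encoder weights under the expected prefixes.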