Use from the Diffusers library:

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "SeanScripts/pyramid-flow-sd3-bf16",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

Converted to bfloat16 from rain1011/pyramid-flow-sd3. Use the text encoders and tokenizers from that repo (or from SD3); there's no point in reuploading them unchanged.
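As a sketch, one way to assemble a local copy that combines the two repos (this assumes the standard Hugging Face subfolder layout with `text_encoder*` and `tokenizer*` directories; adjust the include patterns to match the actual repo structure):

```shell
# Download the bf16 weights from this repo...
huggingface-cli download SeanScripts/pyramid-flow-sd3-bf16 --local-dir pyramid-flow-sd3-bf16

# ...then pull the unchanged text encoders and tokenizers from the original repo
# into the same directory.
huggingface-cli download rain1011/pyramid-flow-sd3 \
    --include "text_encoder*/*" "tokenizer*/*" \
    --local-dir pyramid-flow-sd3-bf16
```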

Inference code is available at https://github.com/jy0205/Pyramid-Flow.

Both 384p and 768p fit in 24 GB of VRAM. For 16 steps (a 5-second video), 384p takes a little over a minute on a 3090 and 768p takes about 7 minutes. For 31 steps (a 10-second video), 384p takes about 10 minutes.

I highly recommend passing `cpu_offloading=True` when generating unless you have more than 24 GB of VRAM.
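As a sketch of where that flag goes, adapted from my memory of the Pyramid-Flow repo's README (the `PyramidDiTForVideoGeneration` entry point, the `generate` arguments, and the `PATH` placeholder follow that README; this is not guaranteed to run as written, so check the repo for the current signature):

```python
import torch
from pyramid_dit import PyramidDiTForVideoGeneration  # from the Pyramid-Flow repo
from diffusers.utils import export_to_video

# PATH is a local directory containing the model weights.
model = PyramidDiTForVideoGeneration(
    "PATH",
    model_dtype="bf16",
    model_variant="diffusion_transformer_768p",  # or "diffusion_transformer_384p"
)
model.vae.enable_tiling()

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

with torch.no_grad(), torch.autocast("cuda", dtype=torch.bfloat16):
    frames = model.generate(
        prompt=prompt,
        num_inference_steps=[20, 20, 20],
        video_num_inference_steps=[10, 10, 10],
        height=768,
        width=1280,
        temp=16,  # 16 -> ~5 s video, 31 -> ~10 s video
        guidance_scale=9.0,
        video_guidance_scale=5.0,
        output_type="pil",
        cpu_offloading=True,  # offload idle submodules to keep peak VRAM within 24 GB
    )

export_to_video(frames, "text_to_video_sample.mp4", fps=24)
```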
