How to use with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# use "mps" instead of "cuda" on Apple silicon
pipe = DiffusionPipeline.from_pretrained(
    "vantagewithai/Sulphur-2-Base-Split", dtype=torch.bfloat16
)
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
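The comment above suggests switching the device string on Apple silicon. A minimal sketch of that choice, with `pick_device` as a hypothetical helper (not part of Diffusers); in practice the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred device string: CUDA GPU first,
    then Apple Metal (mps), then CPU as a fallback."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# Example: on a Mac without CUDA but with Metal support
print(pick_device(False, True))  # -> mps
```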

Split Version of Sulphur 2 for ComfyUI

Original model link: https://huggingface.co/SulphurAI/Sulphur-2-base

Watch us on YouTube: @VantageWithAI

Sulphur 2

An uncensored video generation model based on LTX 2.3, natively supporting both text-to-video (t2v) and image-to-video (i2v), as well as all other LTX 2.3 formats.
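Since the model handles both modes, a pipeline call differs only in whether a conditioning image is passed alongside the prompt. A sketch with `build_pipe_kwargs` as a purely illustrative helper (the `prompt` and `image` keyword names match the Diffusers call above; the helper itself is not part of any library):

```python
def build_pipe_kwargs(prompt: str, image=None) -> dict:
    """Assemble keyword arguments for one pipeline call.
    t2v passes only a prompt; i2v additionally passes the
    conditioning image as the first frame."""
    kwargs = {"prompt": prompt}
    if image is not None:
        kwargs["image"] = image  # i2v: condition generation on this image
    return kwargs

# t2v call: no image, prompt only
print(build_pipe_kwargs("A man plays a red electric guitar."))
```

The actual invocation then stays uniform, e.g. `pipe(**build_pipe_kwargs(prompt, image))`.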


Credits

Funders

  • Anonymous funder #1: supported the original Sulphur
  • Anonymous funder #2: made Sulphur 2 possible; this model wouldn't exist without them

Thank you to everyone who contributed.

Downloads last month: 1,208
Format: GGUF
Model size: 9B params
Architecture: qwen35
Quantization: 8-bit
