Made some changes to the nodes. Do a git pull and re-run requirements.
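In case it helps, here is a minimal sketch of those two steps in Python; the node directory path is an assumption based on a default ComfyUI layout, so adjust it to your install:

```python
# Update helper sketch: pull the latest nodes and reinstall requirements.
# The path below assumes a default ComfyUI layout; adjust as needed.
import subprocess
import sys
from pathlib import Path

node_dir = Path("ComfyUI/custom_nodes/ComfyUI_MagiHuman_fp8_ditFIX_nodes")

subprocess.run(["git", "-C", str(node_dir), "pull"], check=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", str(node_dir / "requirements.txt")],
    check=True,
)
```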

MUST USE THE NEW TEST WORKFLOW! There are new nodes to unload the weights of the first model.
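For context, unloading the first model's weights essentially means moving them out of VRAM once that model has finished. A minimal sketch of that idea in plain PyTorch (illustrative only, not the node set's actual code):

```python
import torch

def unload_model(model: torch.nn.Module) -> None:
    """Free VRAM by moving a finished model's weights to system RAM."""
    model.to("cpu")            # copy parameters and buffers out of VRAM
    torch.cuda.synchronize()   # make sure pending GPU work is done
    torch.cuda.empty_cache()   # return cached allocator blocks to the driver
```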

If it runs for you, please post your generations in the Discussions! I can't run it myself; it's too big at ~30 GB total with everything.

Model details below, with the node link.


EXPERIMENTAL! Do not expect it to work until I change the model card to confirm that it does. I'm constantly updating the nodes to get it to run, so keep git pulling. Just merged the files to match another contributor's workflow; this will lessen the requirements for lower-VRAM users. It's still a 16 GB fp8 model, and the SR model that upscales the output is heavy as well. Working on an offloading implementation that doesn't need Sage Attention or Triton; it will be in the workflow once I test it. I will update the repo with a SEPARATE offloading workflow, so check back soon.
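The rough idea behind offloading without Sage Attention or Triton is plain-PyTorch block swapping: keep the transformer blocks in system RAM and move each one onto the GPU only while it runs. A hypothetical sketch of that technique (not the repo's actual implementation):

```python
import torch

def attach_block_offload(blocks: torch.nn.ModuleList, device: str = "cuda") -> None:
    """Register hooks so each block occupies VRAM only during its own forward pass."""
    def load(module, args):
        module.to(device)   # bring this block's weights into VRAM just in time

    def evict(module, args, output):
        module.to("cpu")    # push them back out as soon as the block has run
        return output

    for blk in blocks:
        blk.register_forward_pre_hook(load)
        blk.register_forward_hook(evict)
```

This trades speed for VRAM: every block crosses the PCIe bus twice per forward pass, but peak VRAM drops to roughly one block plus activations.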

THIS MODEL WILL NOT RUN WITHOUT MY CUSTOM NODE SET. IT'S CURATED FOR THIS MODEL SPECIFICALLY!

The t5gemma text encoder GGUF goes in the "gguf" folder, NOT the text encoder folder.

Requires these custom nodes (I forked the node set to include fp8 models): https://github.com/RealRebelAI/ComfyUI_MagiHuman_fp8_ditFIX_nodes
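For a first-time install (as opposed to the git pull above), a sketch that clones the fork into ComfyUI's custom_nodes directory and installs its requirements; the ComfyUI path is an assumption:

```python
import subprocess
import sys
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")  # assumed ComfyUI layout
repo = "https://github.com/RealRebelAI/ComfyUI_MagiHuman_fp8_ditFIX_nodes"

subprocess.run(["git", "clone", repo], cwd=custom_nodes, check=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r",
     str(custom_nodes / "ComfyUI_MagiHuman_fp8_ditFIX_nodes" / "requirements.txt")],
    check=True,
)
```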

Files are in the repo. Place them as follows; a quick placement check is sketched after this list:

- fp8 model: diffusion models folder
- text encoder GGUF: gguf folder, NOT the text encoder folder
- Wan VAE: vae folder
- SD audio VAE: vae folder
- turbo VAE: vae folder
- SR fp8 model: diffusion models folder
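To double-check placement, a small script that verifies each folder contains something matching the list above. The models root and the glob patterns are assumptions; the exact filenames come from the repo:

```python
from pathlib import Path

models = Path("ComfyUI/models")  # assumed ComfyUI models root

# folder -> glob patterns for the files the list above expects there
expected = {
    "diffusion_models": ["*fp8*"],                        # main fp8 model + SR fp8 model
    "gguf":             ["*.gguf"],                       # t5gemma text encoder
    "vae":              ["*wan*", "*audio*", "*turbo*"],  # the three VAEs
}

for folder, patterns in expected.items():
    for pat in patterns:
        hits = list((models / folder).glob(pat))
        status = "OK     " if hits else "MISSING"
        print(f"{status} {folder}/{pat} -> {[h.name for h in hits]}")
```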

GGUF text encoder details: 9B params, gemma architecture, 6-bit quantization.