No way to load these quantized models into the UltraShape workflow.

#1
by Saptansu - opened

Please provide a guide to connect these quantized models to the existing UltraShape nodes (official ComfyUI); they only support .pt, while your quants are in .safetensors and .gguf.

I tried tweaking node.py in the UltraShape custom nodes to make it load .safetensors files, but loading yours produced multiple model-mismatch errors, sampling errors, etc.
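(For reference, mismatch errors like these often come down to key naming: a .pt checkpoint saved from a wrapped module usually prefixes every tensor name, while a flat .safetensors dump may not. A minimal sketch of the usual remapping fix, with a hypothetical "model." prefix, is below; the actual prefix in these checkpoints may differ.)

```python
def strip_prefix(state_dict, prefix="model."):
    """Remove a wrapping prefix from state-dict keys.

    Checkpoints saved from a wrapped module (e.g. under an EMA or
    Lightning wrapper) often prefix every key; stripping the prefix
    lets load_state_dict match the bare module's parameter names.
    The "model." prefix here is only an example.
    """
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }
```

Printing the first few keys of both the checkpoint and the module's own state dict usually shows immediately which side carries the extra prefix.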

Please provide a guide to run your weights. I am very tight on resources (12 GB VRAM + 16 GB RAM), so the quantized models are my only hope.

Owner

Thanks for your interest! Here is the base repo to run the GGUF quants:
Aero-Ex/UltraShape-1.0

Here is an example command to run:

python scripts/infer_dit_refine.py \
  --config configs/infer_dit_refine.yaml \
  --gguf /path/ultrashape_v1-Q8.gguf \
  --vae /path/ultrashape_vae.safetensors \
  --conditioner /path/ultrashape_conditioner.safetensors \
  --image inputs/image/1.png \
  --mesh inputs/coarse_mesh/1.glb \
  --num_latents 16384 \
  --steps 20 \
  --chunk_size 8000 \
  --low_vram \
  --octree_res 1024 \
  --high_token_mode \
  --seq_cfg
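(If the GGUF file fails to load, a quick stdlib-only sanity check of its header can rule out a truncated or corrupted download before debugging the script itself. This sketch assumes the standard GGUF header layout: a 4-byte "GGUF" magic, a little-endian uint32 version, then uint64 tensor and metadata-KV counts.)

```python
import struct

def gguf_header(path):
    """Read the fixed-size GGUF header and return its fields.

    Layout (per the GGUF spec): b"GGUF" magic, uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count, all little-endian.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensors": tensor_count, "metadata_kv": kv_count}
```

A healthy quant file should report version 3 and a plausible tensor count; a magic mismatch or a struct error means the download is damaged.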
