12G VRAM fails to run
#2
by
ZKong
- opened
It always compiles, even when `torch.compile(pipe.transformer)` is disabled.
`enable_model_cpu_offload` can still exceed 12G, so I have to use `enable_sequential_cpu_offload`. It is slower, but max VRAM is about 3G — very low VRAM use!
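For context, the two offload modes mentioned above trade speed for peak VRAM differently: `enable_model_cpu_offload` moves whole sub-models (text encoder, transformer, VAE) onto the GPU one at a time, while `enable_sequential_cpu_offload` streams individual layers, which is much slower but keeps peak VRAM very low. A minimal sketch of the workaround described here — the model ID is a placeholder, since the thread does not name the pipeline:

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder model ID; substitute the actual pipeline from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "some-org/some-model",  # hypothetical, not named in the thread
    torch_dtype=torch.bfloat16,
)

# Option 1: offloads whole sub-models; faster, but peak VRAM can
# still exceed 12 GB on large transformers, as reported above.
# pipe.enable_model_cpu_offload()

# Option 2: offloads layer by layer; slower, but the reported
# peak VRAM drops to roughly 3 GB.
pipe.enable_sequential_cpu_offload()

image = pipe("a prompt").images[0]
```

Both methods require `accelerate` to be installed; they should not be combined with a manual `pipe.to("cuda")` call, which would defeat the offloading.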
ZKong
changed discussion status to
closed
