Any chance we will see a lower quant?
#6 · opened by realrebelai
I am constrained to 8 GB of VRAM and only 16 GB of RAM to allocate, so I was wondering if it's possible to quant the model down to 10 GB or less? At that size I would be able to fit both the model and the text encoder.
You may need more RAM. In my test workflow the text encoder needs a lot of RAM and VRAM, while the transformer itself needs about 5 GB of VRAM when running 81 frames at 512*768.
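For a rough sense of whether a sub-10 GB quant is feasible, you can estimate the weight size from the parameter count and the bits per weight. A minimal sketch below; the 14B parameter count and the 5% overhead factor (for quantization scales/zero-points) are assumptions for illustration, not values taken from this model's card:

```python
def quant_size_gib(n_params: float, bits_per_weight: float, overhead: float = 1.05) -> float:
    """Approximate in-VRAM/on-disk size of quantized weights in GiB.

    overhead: rough multiplier for the scale/zero-point metadata
    that quantized formats store alongside the packed weights.
    """
    return n_params * bits_per_weight / 8 * overhead / (1024 ** 3)

# Compare common quant levels for an assumed 14B-parameter transformer.
for bits in (8, 6, 5, 4):
    print(f"Q{bits}: ~{quant_size_gib(14e9, bits):.1f} GiB")
```

By this estimate, a hypothetical 14B model would need roughly a 5-bit or lower quant to land under 10 GiB, and the text encoder's memory comes on top of that.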