The stated VRAM requirements on GitHub seem almost impossibly high for both versions of the model relative to the claimed parameter count

#1
by DiffusionFanatic1 - opened

Is this perhaps related to the current hard-coded requirement of running T5-XXL in full precision?

Linum AI org

Yep, it's due to the T5. You can just free the T5 after this line --

https://github.com/Linum-AI/linum-v2/blob/298b1bb9186b5b9ff60331ee44de746734a79075/linum_v2/models/text2video.py#L288
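For anyone who wants to do this now, the teardown could look something like the sketch below. The attribute name `text_encoder` and the `pipeline` object are assumptions for illustration, not the actual linum-v2 API; adapt them to wherever the T5 is held in `text2video.py`.

```python
import gc


def free_text_encoder(pipeline):
    """Drop the large T5 text encoder once the prompt embeddings are computed.

    `pipeline.text_encoder` is a hypothetical attribute name; substitute the
    actual reference to the T5 module in linum-v2.
    """
    pipeline.text_encoder = None  # release the reference to the T5 weights
    gc.collect()                  # prompt Python to free the module promptly
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return the freed VRAM to the allocator
    except ImportError:
        pass  # torch not available; nothing GPU-side to clean up
```

Call this right after the linked line (i.e. once the embeddings exist), and the T5's VRAM should be reclaimed before the video model runs.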

Later this week, we'll make a PR to have the T5 free itself (optionally) in order to reduce overall VRAM.
