---
license: other
license_name: ltx-2-community-license-agreement
license_link: https://github.com/Lightricks/LTX-2/blob/main/LICENSE
tags:
- comfyui
- diffusion-single-file
---

Separated LTX2.3 checkpoints for an alternative way to load the models in ComfyUI.

The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls. The models marked `input_scaled` additionally have activation scaling and are set to run with fp8 matmuls on supported hardware (roughly 40xx-series and later NVIDIA GPUs).

As this is the first time I'm attempting to calibrate input scales, these are pretty experimental, but results-wise they seem to work. This is a test on a 4090, 8 steps with the distill model:

<video controls autoplay width="50%" src="https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/ALNr3_j0klp29fHkI3pyt.mp4"></video>

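The input-scale calibration mentioned above boils down to observing activation magnitudes on sample inputs and deriving a static scale per layer. A minimal sketch, assuming PyTorch forward pre-hooks (the function and class names here are hypothetical, not the actual calibration code used for these checkpoints):

```python
import torch
import torch.nn as nn

class AbsMaxObserver:
    """Forward pre-hook that tracks the running abs-max of a layer's input."""
    def __init__(self):
        self.amax = 0.0

    def __call__(self, module, inputs):
        self.amax = max(self.amax, inputs[0].abs().max().item())

def calibrate_input_scales(model: nn.Module, batches, fp8_max: float = 448.0):
    """Run calibration batches and derive a static input scale per Linear layer."""
    observers, handles = {}, []
    for name, mod in model.named_modules():
        if isinstance(mod, nn.Linear):
            obs = AbsMaxObserver()
            observers[name] = obs
            handles.append(mod.register_forward_pre_hook(obs))
    with torch.no_grad():
        for x in batches:
            model(x)
    for h in handles:
        h.remove()
    # Static input scale: observed abs-max mapped onto the fp8 range.
    return {name: obs.amax / fp8_max for name, obs in observers.items()}
```

How representative the calibration data is largely determines how well the resulting scales generalize, which is why these checkpoints are flagged as experimental.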
Tiny VAE by [madebyollin](https://github.com/madebyollin/taehv/).

Can currently be used like this: