Separated LTX2.3 checkpoint for an alternative way to load the models in Comfy.
The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls; the models marked `input_scaled` additionally have activation scaling and are set to run with fp8 matmuls on supported hardware (roughly 40xx and later Nvidia GPUs).

As this is the first time I'm attempting to calibrate input scales, these are pretty experimental, but result-wise they seem to work. Here is a test on a 4090, 8 steps with distill:
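To illustrate the difference between the two schemes, here is a minimal pure-Python sketch (hypothetical helper names; the actual fp8 rounding and per-layer bookkeeping are omitted), assuming simple per-tensor absmax scaling onto the e4m3 range:

```python
import random

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3


def static_weight_scale(weights):
    # Static weight scale: computed once, offline, from the weights alone,
    # mapping the largest weight magnitude onto the fp8 range.
    return max(abs(w) for w in weights) / FP8_E4M3_MAX


def calibrated_input_scale(calibration_batches):
    # Input (activation) scale: activations depend on the data, so the
    # scale is estimated from the max magnitude seen over calibration runs.
    amax = max(abs(x) for batch in calibration_batches for x in batch)
    return amax / FP8_E4M3_MAX


random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(256)]
w_scale = static_weight_scale(weights)

batches = [[random.gauss(0.0, 3.0) for _ in range(64)] for _ in range(4)]
x_scale = calibrated_input_scale(batches)
```

With only a weight scale, the activations stay in higher precision and the matmul runs in bf16/fp16; with a calibrated input scale as well, both operands can be cast to fp8, which is what lets the `input_scaled` models run the matmul itself in fp8 on supported hardware.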