Kijai committed (verified) · Commit e75e572 · 1 parent: ccf05ea

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -13,7 +13,7 @@ Separated LTX2.3 checkpoint for alternative way to load the models in Comfy

 ![image](https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/KnDASZbjsLio38bS9XdVQ.png)

-The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls; the models marked `input_scaled` additionally have activation scaling and are set to run with fp8 matmuls on supported hardware.
+The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls; the models marked `input_scaled` additionally have activation scaling and are set to run with fp8 matmuls on supported hardware (roughly 40xx and later Nvidia GPUs).

 As this is the first time I've attempted to calibrate input scales, these are pretty experimental, but the results seem to work; this is a test on a 4090, 8 steps with distill:
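For illustration, the two quantization schemes the README describes can be sketched as follows. This is a simplified NumPy simulation, not the actual checkpoint-conversion code: the per-tensor absmax scaling and the fp8 e4m3 max value of 448 are standard practice, but all function names here are hypothetical, and the fp8 grid is coarsely approximated by an integer grid.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in fp8 e4m3

def static_weight_scale(w):
    # "Basic static weight scale": one per-tensor scale from the weight absmax.
    return np.abs(w).max() / FP8_E4M3_MAX

def calibrate_input_scale(activations):
    # Activation (input) scale calibrated from sample activations -- the extra
    # step the `input_scaled` models add on top of the static weight scales.
    return max(np.abs(a).max() for a in activations) / FP8_E4M3_MAX

def fake_quant(x, scale):
    # Simulate quantize -> dequantize; an integer grid stands in for fp8 here.
    return np.clip(np.round(x / scale), -FP8_E4M3_MAX, FP8_E4M3_MAX) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
calib = [rng.standard_normal((2, 4)).astype(np.float32) for _ in range(8)]

w_s = static_weight_scale(w)          # weight-only static scale
x_s = calibrate_input_scale(calib)    # calibrated input scale

x = calib[0]
y_ref = x @ w.T                                     # full-precision matmul
y_q = fake_quant(x, x_s) @ fake_quant(w, w_s).T     # both operands quantized
err = float(np.max(np.abs(y_ref - y_q)))            # small quantization error
```

With only the weight scale, activations stay in higher precision and the matmul cannot run in fp8; once input scales are calibrated too, both operands can be cast to fp8 and the matmul itself can use the hardware fp8 path (hence the "run with fp8 matmuls" flag on the `input_scaled` models).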