Any plans for the fine tuned S2V?
There is an S2V fine-tune that would greatly benefit from an NVFP4 conversion so it can run on weaker hardware.
I had tried to convert the models myself and got black-screen outputs the night before you posted yours, so your conversions work. My card can now run at speed instead of crawling near its VRAM limit, and from recent tests the quality looks almost the same.
Thanks.
I take it you don't know the DaSiWa creator, going by his post earlier today. It's better to ask before redistributing. It may not seem like a big deal, but a lot of work and effort went into fine-tuning the model to get it to its current state.
The reason you got black-screen outputs was probably that the model is originally fp8_scaled, which stores an fp8 tensor plus a scale weight that the whole tensor is multiplied by. If you forget to multiply by the scale, your values are off, which usually leads to black outputs.
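A minimal sketch of what that means, using NumPy floats to stand in for the fp8 storage (the function and variable names here are made up for illustration, not ComfyUI's actual code):

```python
import numpy as np

# An "fp8_scaled" weight is stored as a low-precision tensor plus one
# scalar scale; the real values are tensor * scale.
def dequantize(stored_tensor, scale):
    # Correct path: multiply the stored tensor by its scale weight.
    return stored_tensor.astype(np.float32) * scale

rng = np.random.default_rng(0)
true_weights = rng.normal(0.0, 0.02, size=4).astype(np.float32)

# Quantize: pick a scale so values fill the fp8 e4m3 range (max ~448),
# then store tensor/scale (the cast to real fp8 is omitted here).
scale = np.abs(true_weights).max() / 448.0
stored = true_weights / scale

right = dequantize(stored, scale)  # matches the original weights
wrong = stored                     # forgot the scale: values are ~1/scale too large
```

With the scale skipped, every weight is hundreds of times too large, which is exactly the kind of blow-up that ends in NaN activations and black frames.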
Also, the model carries its quantization info in both the file metadata and a comfy_quant binary tensor. The newer version of my script writes to both. This was pretty annoying, but writing to both is what fixed loaders trying to read the nvfp4 file as fp8.
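For anyone curious what "both places" means: a .safetensors file starts with a JSON header that can carry free-form `__metadata__` strings alongside the tensor table, so the quant format can live in the metadata and as a tensor entry. This is a simplified pure-Python sketch of that header layout; the key names (`comfy_quant`, `"format": "nvfp4"`) are assumptions for illustration:

```python
import json
import struct

def build_header(tensors, metadata):
    """Build a minimal safetensors-style header: 8-byte little-endian
    length prefix followed by a JSON blob describing the tensors."""
    header = {"__metadata__": metadata}
    offset = 0
    for name, nbytes in tensors.items():
        header[name] = {
            "dtype": "U8",
            "shape": [nbytes],
            "data_offsets": [offset, offset + nbytes],
        }
        offset += nbytes
    blob = json.dumps(header).encode("utf-8")
    return struct.pack("<Q", len(blob)) + blob

# Record the quant format twice: as a marker tensor AND as metadata,
# so readers that only check one of the two still see nvfp4.
hdr = build_header(
    {"model.weight": 16, "comfy_quant": 4},  # quant marker as a tensor entry
    {"format": "nvfp4"},                     # quant marker as metadata
)
```

A loader that only reads one of the two locations would otherwise fall back to assuming fp8, which matches the misread you described.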
I notice the NVFP4 version is censored in some areas. What would cause that?
What do you mean censored?