Model requests?
Do you take NVFP4 model requests?
Sure, hit me
mistralai/Devstral-2-123B-Instruct-2512
Been using your GLM 4.7 Flash religiously
I started it, but it will take a while to finish and test.
Perfect, thaaanks!
Hi GadflyII,
Can I also request an NVFP4 quantization?
Your Qwen3-Coder-NEXT-NVFP4 rocks (my friends are using it), but my VRAM is too low to load it.
https://huggingface.co/GadflyII/Qwen3-Coder-Next-NVFP4
So if you could do one from the REAP (48B) version of the original 80B, that would be awesome!
(From this: https://huggingface.co/Mattepiu/Qwen3-Coder-Next-REAP-48B-A3B )
I tested the MXFP4 version, and despite the 48B size it's still usable:
https://huggingface.co/noctrex/Qwen3-Coder-Next-REAP-48B-A3B-MXFP4_MOE-GGUF
But I would need NVFP4 to take advantage of the Blackwell architecture... (2x 5060 Ti 16 GB)
Thank you very much in advance!
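For context, this is roughly how I'd plan to load it across the two cards with vLLM's tensor parallelism (the repo id is just a placeholder for a checkpoint that doesn't exist yet):

```python
from vllm import LLM, SamplingParams

# Placeholder repo id; tensor_parallel_size=2 splits the weights
# across both 5060 Tis.
llm = LLM(
    model="GadflyII/Qwen3-Coder-Next-REAP-48B-NVFP4",  # hypothetical
    tensor_parallel_size=2,
)

params = SamplingParams(max_tokens=128)
outputs = llm.generate(["Write hello world in Python."], params)
print(outputs[0].outputs[0].text)
```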
I will look at the REAP model. I am not sure if I can quant after they REAP the weights; I may have to make the NVFP4 first, then REAP it.
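Something like this is what I'd try for the quant step first (a rough sketch using llm-compressor's NVFP4 scheme; the model id and calibration dataset are placeholders, not a final recipe):

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Rough sketch: NVFP4 one-shot quant of the full model first, REAP afterwards.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",        # FP4 weights with FP8 block scales for Blackwell
    ignore=["lm_head"],    # keep the output head in higher precision
)

oneshot(
    model="Qwen/Qwen3-Coder-Next",    # placeholder source checkpoint id
    dataset="open_platypus",          # small calibration set for scales
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="Qwen3-Coder-Next-NVFP4",
)
```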
Oh, I see, I would really appreciate it if you could do that!
(Please keep in mind that this should fit on 2x 16 GB, so it must be no more than ~25 GB.)
Thank you in advance.
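For what it's worth, a quick back-of-envelope check of why the budget is so tight (assuming plain 4-bit weights plus one FP8 scale per 16-value block; real checkpoint sizes vary with which layers stay in high precision):

```python
# Back-of-envelope NVFP4 size for a 48B-parameter model (assumption:
# 4-bit weights + one 1-byte FP8 scale per 16-value block, nothing else).
params = 48e9
weights_gb = params * 4 / 8 / 1e9   # 4 bits per parameter -> ~24.0 GB
scales_gb = params / 16 / 1e9       # 1 byte per 16 parameters -> ~3.0 GB
print(f"~{weights_gb:.0f} GB weights + ~{scales_gb:.0f} GB scales "
      f"= ~{weights_gb + scales_gb:.0f} GB before KV cache")
```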
@GadflyII could you do this model? https://huggingface.co/huihui-ai/Huihui-gpt-oss-120b-BF16-abliterated
Qwen3.5 35B is finally here!
I think many of us would need an NVFP4 of this:
https://huggingface.co/Qwen/Qwen3.5-35B-A3B
As no one has created an NVFP4 of it yet.
If you could do that, I would really appreciate it!