Model requests?

#4
by pathosethoslogos - opened

Do you take NVFP4 model requests?

Sure, hit me

mistralai/Devstral-2-123B-Instruct-2512 😊

Been using your GLM 4.7 Flash religiously

I started it, but it will take a while to finish and test.

Perfect, thaaanks!

Hi GadflyII,

Can I also request an NVFP4 quantization? 😊

Your Qwen3-Coder-Next-NVFP4 rocks (my friends are using it), but I can't load it with my limited VRAM.
https://huggingface.co/GadflyII/Qwen3-Coder-Next-NVFP4

So if you could make one from the REAP (48B) version of the original 80B, that would be awesome!
(From this: https://huggingface.co/Mattepiu/Qwen3-Coder-Next-REAP-48B-A3B )

I tested the MXFP4 version, and despite the 48B size it is still usable:
https://huggingface.co/noctrex/Qwen3-Coder-Next-REAP-48B-A3B-MXFP4_MOE-GGUF

But I would need NVFP4 to take advantage of the Blackwell architecture... (2x 5060 Ti 16 GB)
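For anyone else with this setup: a minimal sketch of serving an NVFP4 checkpoint split across two GPUs with vLLM tensor parallelism (vLLM runs NVFP4 natively on Blackwell). The memory setting is illustrative, not a tuned value:

```python
# Minimal sketch, assuming vLLM with NVFP4 support on Blackwell GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="GadflyII/Qwen3-Coder-Next-NVFP4",
    tensor_parallel_size=2,       # split the weights across both 5060 Ti cards
    gpu_memory_utilization=0.90,  # leave a little headroom on each 16 GB card
)

outputs = llm.generate(
    ["Write a hello-world in Rust."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```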

Thank you very much in advance!

I will look at the REAP model. I am not sure if I can quant after they REAP the weights; I may have to make the NVFP4 first, then REAP it.
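For reference, the "make NVFP4 first" step could look roughly like this with llm-compressor, which has an NVFP4 scheme. This is a sketch only: the 80B repo id is a placeholder, the ignore list is a guess (MoE router/gate layers are often left unquantized too), and whether the result survives REAP afterwards is exactly the open question above:

```python
# Hedged sketch of one-shot NVFP4 quantization with llm-compressor.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-Coder-Next-80B-A3B"  # placeholder repo id, not confirmed

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize all Linear layers to NVFP4; keep the output head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# NVFP4 needs calibration data to fit the global scales.
oneshot(
    model=model,
    recipe=recipe,
    dataset="open_platypus",
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained("Qwen3-Coder-Next-NVFP4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-Coder-Next-NVFP4")
```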

Oh, I see; I would really appreciate it if you could do that!
(Please keep in mind that this should fit on 2x 16 GB, so it must be no more than ~25 GB 😊)
Thank you in advance.
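As a rough back-of-envelope on that budget (assuming the NVFP4 layout of 4-bit values with one FP8 scale per 16-element block; exact totals depend on which layers stay unquantized):

```python
# Rough size estimate for a 48B model in NVFP4 (weights only).
params = 48e9                 # 48B parameters
bits_per_weight = 4 + 8 / 16  # 4-bit values + one FP8 scale per 16-block
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.0f} GB for weights alone")  # ~27 GB, before KV cache
```

So even at 4 bits this sits slightly above ~25 GB, leaving only a few GB across the two cards for KV cache and activations; it would be tight.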

@tkg61 The GPT OSS 120B model is already 4-bit.

@pathosethoslogos I realize that, but the BF16 in the name is sort of confusing then…

Qwen3.5 35B is finally here 🚀
I think many of us would need an NVFP4 of this:
https://huggingface.co/Qwen/Qwen3.5-35B-A3B
As no one has created an NVFP4 of it yet 😀

If you could do that, I would really appreciate it!
