Please add quality quants (Q6/Q4/Q3) for this model
I kinda think running this with lower RAM is possible, but I'm not sure whether the quality loss would be too dramatic. And REAP is known to hurt quality a bit
(Also, instead of the GGUF format some have already made, safetensors for MLX and vLLM-style platforms would be great)
AWQ or GPTQ would be nice :D
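If it helps, a plain AutoAWQ run is usually something like this (untested sketch on my end; the paths are placeholders, and AWQ may not support every architecture mentioned in this thread):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder paths -- swap in the actual model
model_path = "org/model-name"
quant_path = "model-name-awq"

# Standard 4-bit AWQ settings
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate and quantize, then save as safetensors for vLLM-style loaders
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```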
On it, no worries!
Thank you <3 I've been trying it myself, but it hasn't worked out yet :D
@cpatonn could you also quant some Nemotron-H and Granite-4.0-H? (And maybe Jet-Nemotron-2B / Nemotron-Flash-3B-Instruct / Jet-Nemotron-4B / Nemotron-H-4B-Instruct-128K, because the quantizer on MLX-Community sometimes won't work with SSM/linear-attention layers.)
Yeah, sure! I do have Granite-4.0-H models quantized, but the current vLLM implementation is not compatible with compressed-tensors INT4 quants.
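For reference, a compressed-tensors W4A16 (INT4 weight) quant is typically produced with llm-compressor along these lines (illustrative sketch, not my exact script; the model ID and calibration settings are placeholders):

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# W4A16 recipe: INT4 weights on Linear layers, lm_head kept in full precision
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="ibm-granite/granite-4.0-h-micro",  # placeholder model ID
    dataset="open_platypus",                  # calibration data
    recipe=recipe,
    output_dir="granite-4.0-h-micro-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

vLLM would normally load the resulting checkpoint directly, but as noted, that path isn't working for these hybrid models yet.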
Are you interested in MLX quants? I'm considering making MLX quants in the future.
@cpatonn Please do both "regular" vLLM quants (dynamic, UD, or whatever, anywhere between Q3 and Q6) AND MLX quants. We need sub-4B models and sub-8B linear-attention models to get more popular.
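For the MLX side, a minimal conversion sketch with mlx_lm (assuming the quantizer cooperates with the SSM/linear-attention layers, which it sometimes doesn't; paths are placeholders):

```python
from mlx_lm import convert

# Convert a Hugging Face checkpoint to MLX and quantize to 4-bit
convert(
    hf_path="org/model-name",       # placeholder source model
    mlx_path="model-name-mlx-4bit", # placeholder output directory
    quantize=True,
    q_bits=4,
    q_group_size=64,
)
```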