deepseek-v3.2-speciale-distilled-raptor-32b-4bit-i1-GGUF?

#1834
by Rebis - opened

Hi,
Is it possible to make an i1-GGUF version of this MLX model?
https://huggingface.co/srswti/deepseek-v3.2-speciale-distilled-raptor-32b-4bit
Thank you in advance.

I don't think we can accept MLX as a model source. I will try, but I really doubt it will quantize.

It's queued!

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#deepseek-v3.2-speciale-distilled-raptor-32b-4bit-GGUF for quants to appear.
