Intel/Qwen3.5-122B-A10B-int4-AutoRound

#1919
by saadsafi - opened

can you please convert this repo to GGUFs:
https://huggingface.co/Intel/Qwen3.5-122B-A10B-int4-AutoRound

AutoRound is probably the most accurate quantization method; I hope the conversion is straightforward.
Many thanks

I don't want to re-quantize an already-quantized model,
so here are the quants of the original full-quality model: https://huggingface.co/mradermacher/Qwen3.5-122B-A10B-GGUF