New model gguf request. MiniMax-M2

#1480
by testamentaddress01 - opened

I request GGUF quantization.
https://huggingface.co/MiniMaxAI/MiniMax-M2

aaand... it's queued. cheers!

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#MiniMax-M2-GGUF for quants to appear.

unfortunately, MiniMaxM2ForCausalLM is not currently supported by llama.cpp

Let's follow https://github.com/ggml-org/llama.cpp/pull/16831. I'm super excited for this model.

@mradermacher
There are only imatrix quants up to Q4_K_S. Will the missing ones still come? (like Q5_K_M)
https://huggingface.co/mradermacher/MiniMax-M2-i1-GGUF/tree/main

@inputout As you can see under https://hf.tst.eu/status.html the imatrix quants for this model are currently still being generated. The largest imatrix quant uploaded so far is i1-Q6_K. I'm also waiting for i1-Q5_K_M, as it is the one I always use.
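If you'd rather not keep refreshing the file tree by hand, here is a minimal sketch of how you could check which quant types have appeared in a repo so far. The `quants_present` helper and the sample filenames are hypothetical/illustrative; the filename pattern assumes the common `<model>.<quant>.gguf` / `<model>.i1-<quant>.gguf` naming convention used for these repos.

```python
import re

# Matches the quant tag (e.g. Q4_K_S, Q6_K, IQ3_M) at the end of a
# GGUF filename, with or without the "i1-" imatrix prefix.
QUANT_RE = re.compile(r"\.(?:i1-)?((?:IQ|Q|F)\w+)\.gguf$")

def quants_present(filenames):
    """Return the sorted set of quant types found in a list of filenames.

    Hypothetical helper for illustration; `filenames` would normally be
    a repo file listing.
    """
    found = set()
    for name in filenames:
        m = QUANT_RE.search(name)
        if m:
            found.add(m.group(1))
    return sorted(found)

# Illustrative filenames, not an actual repo listing:
files = [
    "MiniMax-M2.i1-Q4_K_S.gguf",
    "MiniMax-M2.i1-Q6_K.gguf",
    "README.md",
]
print(quants_present(files))  # ['Q4_K_S', 'Q6_K']
```

To feed it a real listing, you could fetch the filenames with `huggingface_hub.list_repo_files("mradermacher/MiniMax-M2-i1-GGUF")` and pass the result in.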

Oh right, more and more are coming, I was too impatient :-) Thank you for your work.
