MiniMax-M2 GGUF

MiniMax-M2 is an open-source model built for Max coding & agentic workflows (229B parameters). This repository provides GGUF quantizations that can be used with both Ollama and LM Studio.

Make sure you have enough RAM/VRAM to run the model. The size of each quantized variant is shown on the right side of the model card.

Use the model in Ollama

First, download and install Ollama:

https://ollama.com/download
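
On Linux, Ollama's official one-line install script (from the download page above) can be used:

# Install Ollama on Linux via the official script
curl -fsSL https://ollama.com/install.sh | sh

On Windows and macOS, run the installer from the download page instead.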

Command

In the Windows command line, or in a terminal on Ubuntu, type:

ollama run hf.co/John1604/MiniMax-M2-gguf:q4_k_m

(q4_k_m is the quantization type; q2_k, q3_k_m, and the other variants listed below can also be used.)

C:\Users\developer>ollama run hf.co/John1604/MiniMax-M2-gguf:q3_k_m
pulling manifest
...
verifying sha256 digest
writing manifest
success
>>>

After you run ollama run hf.co/John1604/MiniMax-M2-gguf:q4_k_m once, the model appears in the Ollama UI: you can select hf.co/John1604/MiniMax-M2-gguf:q4_k_m from the model list and run it the same way as any other Ollama-supported model.
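
Once pulled, the model can also be queried through Ollama's local REST API, which listens on port 11434 by default. A minimal sketch using the q4_k_m tag from above:

# Send a one-off prompt to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/John1604/MiniMax-M2-gguf:q4_k_m",
  "prompt": "Write a one-line summary of what GGUF is.",
  "stream": false
}'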

Use the model in LM Studio

Download and install LM Studio:

https://lmstudio.ai/

Discover models

In LM Studio, click the "Discover" icon. The "Mission Control" popup window will be displayed.

In the "Mission Control" search bar, type "John1604/MiniMax-M2-gguf" and check "GGUF", the model should be found.

Download the model (you may choose any of the quantization levels listed below).

Load the model.

Ask questions.
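
Once loaded, LM Studio can also serve the model through its OpenAI-compatible local server (enable it in the Developer tab; the default port is 1234). A minimal sketch; the model identifier below is a placeholder, so use whatever name LM Studio shows for the loaded model:

# Ask the loaded model a question via LM Studio's local server.
# "minimax-m2-gguf" is illustrative; check the identifier in LM Studio.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax-m2-gguf",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'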

Quantized models (general quality guide)

| Type   | Bits  | Quality               | Description                           |
|--------|-------|-----------------------|---------------------------------------|
| Q2_K   | 2-bit | πŸŸ₯ Low                | Minimal footprint; only for tests     |
| Q3_K_S | 3-bit | 🟧 Low                | "Small" variant (less accurate)       |
| Q3_K_M | 3-bit | 🟧 Low–Med            | "Medium" variant                      |
| Q4_K_S | 4-bit | 🟨 Med                | Smaller, faster, slightly less quality |
| Q4_K_M | 4-bit | 🟩 Med–High           | "Medium"; best 4-bit balance          |
| Q5_K_S | 5-bit | 🟩 High               | Slightly smaller than Q5_K_M          |
| Q5_K_M | 5-bit | 🟩🟩 High             | Excellent general-purpose quant       |
| Q6_K   | 6-bit | 🟩🟩🟩 Very High      | Almost FP16 quality, larger size      |
| Q8_0   | 8-bit | 🟩🟩🟩🟩 Near-lossless | Baseline                             |

Q8_0 only works in LM Studio as uploaded: it is split across multiple GGUF files, and Ollama needs them merged into a single GGUF locally before it can run the quant.

For the same reason, I do not upload Q6_K, since it would also have to be merged to run in Ollama.
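
If you do want to run Q8_0 in Ollama, the split shards can be merged locally with the llama-gguf-split tool that ships with llama.cpp. A minimal sketch; the shard filename below is illustrative, so substitute the actual first shard you downloaded:

# Merge split GGUF shards into a single file (point at the first shard;
# the tool locates the remaining shards automatically).
llama-gguf-split --merge MiniMax-M2-Q8_0-00001-of-00005.gguf MiniMax-M2-Q8_0.gguf

The merged file can then be imported into Ollama with a one-line Modelfile (FROM ./MiniMax-M2-Q8_0.gguf) and ollama create.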
