This is a custom quant of MiniMaxAI/MiniMax-M2.1 with the following quantization scheme:

  • Q8_0 for the default quantization type (attention, shared experts, etc.)
  • Q4_K for the FFN_UP and FFN_GATE tensors
  • Q5_K for the FFN_DOWN tensors

The idea is that, since the FFN tensors are huge compared to the rest of the tensors in the model, this mix should achieve better quality while keeping the overall model size smaller than a comparable naive quantization.

This model was produced using Bartowski's imatrix.

The model is additionally split with --no-tensor-first-split to allow easier editing of metadata.
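For reference, the recipe above can be sketched with llama.cpp's tools. This is a hedged sketch, not the exact commands used to produce this repo: the tensor-name patterns (ffn_*_exps), input/output filenames, and the split size are assumptions and may need adjusting for your build and this architecture.

```shell
# Sketch of the quantization recipe using llama.cpp's CLI tools.
# Filenames, the ffn_*_exps tensor-name patterns, and the split size
# are assumptions; check `llama-quantize --help` for your build.

# Quantize: Q8_0 base type, with per-tensor overrides for the expert
# FFN weights, guided by an importance matrix.
./llama-quantize \
    --imatrix imatrix.dat \
    --tensor-type ffn_up_exps=q4_k \
    --tensor-type ffn_gate_exps=q4_k \
    --tensor-type ffn_down_exps=q5_k \
    MiniMax-M2.1-F16.gguf MiniMax-M2.1-custom.gguf q8_0

# Split into shards, keeping tensors out of the first shard so its
# metadata can be edited without rewriting tensor data.
./llama-gguf-split --split --split-max-size 45G --no-tensor-first-split \
    MiniMax-M2.1-custom.gguf MiniMax-M2.1-custom
```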

Model size: 229B params
Architecture: minimax-m2
Format: GGUF