A Q2_K_S-Mixed GGUF version of MiniMaxAI/MiniMax-M2.1 generated with intel/auto-round, where the embedding and `lm_head` layers are kept at 8-bit precision, the non-expert linear layers (and shared experts) at 4-bit precision, and the remaining layers at the 2-bit Q2_K_S default.

Script for reproducing this model:

```sh
pip install transformers==4.56.0 torch==2.9.1 auto_round==0.9.4
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "MiniMaxAI/MiniMax-M2.1"

model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="cpu", trust_remote_code=True, dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Keep the embedding and lm_head layers at 8-bit; quantize the remaining
# non-expert linear layers (plus shared experts) to 4-bit. Layers not listed
# here fall back to the 2-bit Q2_K_S default of the chosen GGUF format.
layer_config = {}
for n, m in model.named_modules():
    if n == "lm_head" or isinstance(m, torch.nn.Embedding):
        layer_config[n] = {"bits": 8}
    elif isinstance(m, torch.nn.Linear) and ("expert" not in n or "shared_experts" in n) and n != "lm_head":
        layer_config[n] = {"bits": 4}

# iters=0 selects RTN (round-to-nearest) quantization without tuning;
# disable_opt_rtn=False keeps the optimized RTN variant enabled.
autoround = AutoRound(
    model, tokenizer, iters=0, layer_config=layer_config, nsamples=512, disable_opt_rtn=False
)
autoround.quantize_and_save(output_dir="./output", format="gguf:q2_k_s")
```
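The bit-width assignment rule in the loop above can be illustrated in isolation. This is a minimal sketch of that selection logic, applied to hypothetical module names chosen for illustration (they are not taken from the actual MiniMax-M2.1 module tree):

```python
# Sketch of the per-layer bit-width rule from the quantization script.
def assign_bits(name, is_embedding, is_linear):
    """Return the target bit width for a module, or None for the GGUF default."""
    if name == "lm_head" or is_embedding:
        return 8  # embeddings and the output head stay at 8-bit
    if is_linear and ("expert" not in name or "shared_experts" in name):
        return 4  # non-expert linears, and shared experts, get 4-bit
    return None  # routed expert linears fall back to the 2-bit Q2_K_S default

# Hypothetical module names for illustration only.
examples = {
    "model.embed_tokens": (True, False),
    "lm_head": (False, True),
    "model.layers.0.self_attn.q_proj": (False, True),
    "model.layers.0.mlp.experts.3.up_proj": (False, True),
    "model.layers.0.mlp.shared_experts.up_proj": (False, True),
}
for name, (is_emb, is_lin) in examples.items():
    print(f"{name}: {assign_bits(name, is_emb, is_lin)}")
```

Note that routed-expert linear layers are deliberately left out of `layer_config`, which is what keeps the overall file near the 2-bit Q2_K_S size despite the 8-bit embeddings.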
GGUF

- Model size: 229B params
- Architecture: minimax-m2
- Quantization: 2-bit (Q2_K_S, mixed precision)
