# MiniMax-M2.7 INT4 AutoRound

4-bit quantized version of MiniMaxAI/MiniMax-M2.7 using Intel AutoRound.

## Use with Transformers

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Lasimeri/MiniMax-M2.7-int4-AutoRound", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Lasimeri/MiniMax-M2.7-int4-AutoRound", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Lasimeri/MiniMax-M2.7-int4-AutoRound", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
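
A checkpoint of this size may not fit on a single device with the stock snippet above. A hedged variant using Accelerate's `device_map` sharding (assuming the quantized weights load through the regular `from_pretrained` path and enough aggregate VRAM is available):

```python
from transformers import AutoModelForCausalLM

# device_map="auto" lets Accelerate place layers across all visible GPUs
# (spilling to CPU if needed); torch_dtype="auto" keeps the checkpoint's
# stored dtypes instead of upcasting.
model = AutoModelForCausalLM.from_pretrained(
    "Lasimeri/MiniMax-M2.7-int4-AutoRound",
    trust_remote_code=True,
    device_map="auto",
    torch_dtype="auto",
)
```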

## Quantization Config

| Setting | Value |
|---|---|
| Scheme | W4A16 (INT4 weights, FP16 activations) |
| Group size | 128 |
| Ignored layers | MoE gate layers (kept at full precision) |
| Method | RTN (iters=0) |
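
For reference, a minimal sketch of how a config like this can be produced with AutoRound; `iters=0` is what selects plain round-to-nearest. The gate-layer name pattern and the `layer_config` route for keeping those layers at full precision are assumptions, so inspect `model.named_modules()` before reusing this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

base = "MiniMaxAI/MiniMax-M2.7"
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)

# Assumed name pattern for the MoE gate/router layers; bits=16 in a
# layer_config entry is used here to mean "leave this layer unquantized".
layer_config = {
    name: {"bits": 16}
    for name, _ in model.named_modules()
    if name.endswith(".gate")
}

# bits=4 / group_size=128 match the table above; iters=0 skips the
# learned-rounding optimization, i.e. plain RTN.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, iters=0,
                      layer_config=layer_config)
autoround.quantize()
autoround.save_quantized("MiniMax-M2.7-int4-AutoRound", format="auto_round")
```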

## Serving

### vLLM

```bash
vllm serve Lasimeri/MiniMax-M2.7-int4-AutoRound \
  --trust-remote-code \
  --tensor-parallel-size 8 \
  --enable-auto-tool-choice \
  --tool-call-parser minimax_m2 \
  --reasoning-parser minimax_m2_append_think
```
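
The `--enable-auto-tool-choice` / `--tool-call-parser` flags make the server return OpenAI-style tool calls. A hedged client sketch against vLLM's OpenAI-compatible endpoint (default `http://localhost:8000/v1`); the `get_weather` tool is purely illustrative:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool definition, used only to demonstrate the request shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Lasimeri/MiniMax-M2.7-int4-AutoRound",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```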

### SGLang

```bash
python -m sglang.launch_server \
  --model-path Lasimeri/MiniMax-M2.7-int4-AutoRound \
  --trust-remote-code \
  --tp 8 \
  --reasoning-parser minimax-append-think \
  --tool-call-parser minimax-m2
```
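
SGLang likewise serves an OpenAI-compatible API (default port 30000). A minimal chat sketch, assuming the default host/port:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Lasimeri/MiniMax-M2.7-int4-AutoRound",
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```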

## Quantization Hardware

Quantized on a single-node rig:

| Component | Spec |
|---|---|
| CPU | AMD EPYC 7742 (64C / 128T) |
| RAM | 251 GB DDR4 |
| GPUs | 8× RTX 3080 (20 GB modded) |

Peak resource usage during quantization: ~25.6 GB RAM, ~5 GB VRAM on GPU 0, ~1.3 GB on each remaining GPU.
