```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(
    "Lasimeri/MiniMax-M2.7-int4-AutoRound", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "Lasimeri/MiniMax-M2.7-int4-AutoRound", trust_remote_code=True
)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
# MiniMax-M2.7 INT4 AutoRound
A 4-bit (W4A16) quantization of MiniMaxAI/MiniMax-M2.7, produced with Intel's AutoRound toolkit.
## Quantization Config
| Setting | Value |
|---|---|
| Scheme | W4A16 (INT4 weights, FP16 activations) |
| Group size | 128 |
| Ignored layers | MoE gate layers (kept at full precision) |
| Method | RTN (round-to-nearest; AutoRound with `iters=0`) |
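To make the table concrete: W4A16 rounds each weight to a 4-bit integer, with every contiguous group of 128 weights sharing one FP16 scale, while activations stay in FP16. The sketch below shows group-wise RTN on a toy weight matrix (a symmetric, zero-point-free variant for illustration; the exact AutoRound/RTN details may differ):

```python
import numpy as np

def rtn_quantize(w, group_size=128, bits=4):
    """Group-wise symmetric round-to-nearest quantization.

    Each group of `group_size` weights shares one scale; values are
    rounded to signed integers in [-2**(bits-1), 2**(bits-1) - 1].
    """
    qmax = 2 ** (bits - 1) - 1                    # 7 for INT4
    groups = w.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)   # avoid division by zero
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax)
    return q.astype(np.int8), scales

def rtn_dequantize(q, scales, shape):
    # Reconstruct FP weights from INT4 values and per-group scales
    return (q * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 512)).astype(np.float32)  # toy weight matrix
q, s = rtn_quantize(w)
w_hat = rtn_dequantize(q, s, w.shape)
err = np.abs(w - w_hat).max()
print(f"max abs reconstruction error: {err:.4f}")
```

The worst-case per-weight error is half a quantization step, i.e. the group's scale divided by two, which is why outlier-heavy groups (like the MoE gates kept at full precision here) are the ones that suffer most under naive rounding.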
## Usage

### vLLM
```shell
vllm serve Lasimeri/MiniMax-M2.7-int4-AutoRound \
  --trust-remote-code \
  --tensor-parallel-size 8 \
  --enable-auto-tool-choice \
  --tool-call-parser minimax_m2 \
  --reasoning-parser minimax_m2_append_think
```
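Once the server above is running, it speaks the OpenAI-compatible chat-completions API. A minimal client-side sketch that only builds the request payload (the URL is vLLM's default port, an assumption; adjust to your deployment, and note that actually posting it requires the live server):

```python
import json

# Chat-completions payload for the OpenAI-compatible endpoint exposed
# by `vllm serve`. The URL is an assumption (vLLM's default).
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "Lasimeri/MiniMax-M2.7-int4-AutoRound",
    "messages": [{"role": "user", "content": "Who are you?"}],
    "max_tokens": 128,
}
body = json.dumps(payload)
print(body)

# With the server running, send it with the standard library, e.g.:
#   import urllib.request
#   req = urllib.request.Request(
#       url, body.encode(), {"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

The same payload works against the SGLang launch command below, since SGLang also serves an OpenAI-compatible endpoint.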
### SGLang
```shell
python -m sglang.launch_server \
  --model-path Lasimeri/MiniMax-M2.7-int4-AutoRound \
  --trust-remote-code \
  --tp 8 \
  --reasoning-parser minimax-append-think \
  --tool-call-parser minimax-m2
```
## Quantization Hardware
Quantized on a single-node rig:
| Component | Spec |
|---|---|
| CPU | AMD EPYC 7742 (64C / 128T) |
| RAM | 251 GB DDR4 |
| GPUs | 8× RTX 3080 (20 GB modded) |
Peak resource usage during quantization: ~25.6 GB RAM, ~5 GB VRAM on GPU 0, ~1.3 GB on each remaining GPU.
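As a rough sanity check on the checkpoint size this rig has to handle, the per-weight storage cost implied by the quantization config can be estimated; this back-of-the-envelope sketch ignores the full-precision MoE gate layers and any zero-points, so the real on-disk ratio will differ slightly:

```python
# Back-of-the-envelope storage cost of W4A16 with group size 128.
GROUP_SIZE = 128
WEIGHT_BITS = 4
SCALE_BITS = 16  # one FP16 scale shared by each group of 128 weights

bits_per_weight = WEIGHT_BITS + SCALE_BITS / GROUP_SIZE  # 4 + 0.125
ratio = 16 / bits_per_weight                             # vs. plain FP16
print(f"{bits_per_weight:.3f} bits/weight, ~{ratio:.2f}x smaller than FP16")
```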
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Lasimeri/MiniMax-M2.7-int4-AutoRound",
    trust_remote_code=True,
)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```