# MiniMax-M2.7 Abliterated Heretic — FP8
FP8 dynamic (per-channel weight, per-token activation) quantization of Youssofal/MiniMax-M2.7-abliterated-BF16, which is itself a Heretic-method abliteration of MiniMaxAI/MiniMax-M2.7.
## Lineage
- Base model: MiniMaxAI/MiniMax-M2.7 (229B MoE, 10B active, 256 experts, 200K ctx)
- Abliteration: Youssofal/MiniMax-M2.7-abliterated-BF16 — Heretic / Ablated Refusal Adaptation (ARA)
- This repo: FP8 quant of the above for fast vLLM inference
## Format

- Weights: `float8_e4m3fn`, per-output-channel symmetric scales (`float32`)
- Activations: dynamic per-token FP8 at runtime
- KV cache: run with `--kv-cache-dtype fp8` for full FP8 serving
- Config: `compressed-tensors`, format `float-quantized`; `lm_head` excluded from quantization
- Tensors: 96,165 total across 47 safetensors shards, ~230 GB
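You can verify these settings against the released files by reading the `quantization_config` section of the repo's `config.json`. A small sketch using `huggingface_hub` (the repo id is taken from this card; nothing else is downloaded):

```python
import json
from huggingface_hub import hf_hub_download

# Fetch only config.json and print its quantization section.
cfg_path = hf_hub_download(
    repo_id="LittleNicky55/MiniMax-M2.7-abliterated-Heretic-FP8",
    filename="config.json",
)
with open(cfg_path) as f:
    cfg = json.load(f)
print(json.dumps(cfg.get("quantization_config", {}), indent=2))
```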
## Serve with vLLM

```bash
vllm serve LittleNicky55/MiniMax-M2.7-abliterated-Heretic-FP8 \
  --tensor-parallel-size 2 \
  --dtype bfloat16 \
  --kv-cache-dtype fp8 \
  --max-model-len 196608 \
  --gpu-memory-utilization 0.92 \
  --trust-remote-code \
  --enable-prefix-caching
```
Fits on 2× H200 (141 GB each, 282 GB total): roughly 230 GB of weights, leaving the remainder for KV cache and activation workspace.
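Once the server is up, it exposes vLLM's OpenAI-compatible API (port 8000 by default). A quick smoke test with the `openai` Python client; the local URL and dummy key are the usual vLLM conventions, not something specific to this repo:

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="LittleNicky55/MiniMax-M2.7-abliterated-Heretic-FP8",
    messages=[{"role": "user", "content": "Summarize FP8 quantization in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```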
## Quantization method
Quantized with a streaming per-shard script: for each Linear weight `W`, compute the per-output-channel scale `scale = W.abs().amax(dim=1) / 448.0` (448 is the largest finite `float8_e4m3fn` value), then `W_fp8 = (W / scale).to(float8_e4m3fn)`. No calibration data is required (FP8_DYNAMIC scheme: activations are scaled per token at runtime).
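A minimal sketch of that per-channel step in PyTorch (≥ 2.1 for `torch.float8_e4m3fn`); the function name and the clamp epsilon are illustrative, not the repo's actual script:

```python
import torch

FP8_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8_per_channel(W: torch.Tensor):
    """Per-output-channel symmetric FP8 quantization of a 2-D Linear weight.

    W has shape [out_features, in_features]; returns (W_fp8, scale) with a
    float32 scale of shape [out_features], so W ≈ W_fp8.float() * scale[:, None].
    """
    # Per-row absolute max, kept as a column for broadcasting; epsilon avoids /0.
    amax = W.abs().amax(dim=1, keepdim=True).float().clamp(min=1e-12)
    scale = amax / FP8_MAX
    W_fp8 = (W.float() / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return W_fp8, scale.squeeze(1)

# Round-trip check on a toy weight.
W = torch.randn(8, 16, dtype=torch.bfloat16)
W_fp8, scale = quantize_fp8_per_channel(W)
W_hat = W_fp8.float() * scale.unsqueeze(1)
print((W.float() - W_hat).abs().max())  # small per-channel quantization error
```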
## License
Inherits the non-commercial MiniMax M-Series license from the base model.