MiniMax-M2.7 Abliterated Heretic — FP8

FP8 dynamic (per-channel weight, per-token activation) quantization of Youssofal/MiniMax-M2.7-abliterated-BF16, which is itself a Heretic-method abliteration of MiniMaxAI/MiniMax-M2.7.

Lineage

MiniMaxAI/MiniMax-M2.7 → Youssofal/MiniMax-M2.7-abliterated-BF16 (Heretic-method abliteration) → LittleNicky55/MiniMax-M2.7-abliterated-Heretic-FP8 (this repo, FP8 dynamic quantization)

Format

  • Weights: float8_e4m3fn, per-output-channel symmetric scales (float32)
  • Activations: dynamic per-token FP8 at runtime
  • KV cache: run with --kv-cache-dtype fp8 for full FP8 serving
  • Config: quant_method compressed-tensors, format float-quantized; lm_head is excluded from quantization and kept at original precision
  • Tensors: 96,165 total across 47 safetensors shards, ~230 GB on disk (≈229B parameters); see the inspection sketch below
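
In this format the FP8 weights are stored next to their float32 per-channel scales inside each shard. A minimal inspection sketch with safetensors, assuming the standard Hugging Face shard naming for the 47-shard layout and the compressed-tensors *.weight / *.weight_scale pairing (both assumptions, not verified against this repo):

import torch
from safetensors import safe_open

# List every FP8 weight and its matching per-channel scale in one shard.
# "model-00001-of-00047.safetensors" is an assumed filename.
with safe_open("model-00001-of-00047.safetensors", framework="pt") as f:
    keys = set(f.keys())
    for name in sorted(keys):
        t = f.get_tensor(name)
        if t.dtype == torch.float8_e4m3fn and name + "_scale" in keys:
            scale = f.get_tensor(name + "_scale")
            print(name, tuple(t.shape), "scale:", tuple(scale.shape), scale.dtype)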

Serve with vLLM

vllm serve LittleNicky55/MiniMax-M2.7-abliterated-Heretic-FP8 \
  --tensor-parallel-size 2 \
  --dtype bfloat16 \
  --kv-cache-dtype fp8 \
  --max-model-len 196608 \
  --gpu-memory-utilization 0.92 \
  --trust-remote-code \
  --enable-prefix-caching

Fits comfortably on 2× H200 (141 GB each, 282 GB total): ~230 GB of FP8 weights leaves roughly 50 GB for KV cache and activation workspace.
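
Once the server is up, it speaks the OpenAI-compatible API. A minimal client call against the command above (assumes vLLM's default port 8000 and the stock openai Python client; vLLM ignores the API key):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="LittleNicky55/MiniMax-M2.7-abliterated-Heretic-FP8",
    messages=[{"role": "user", "content": "Summarize FP8 dynamic quantization in two sentences."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)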

Quantization method

Weights were quantized shard by shard with a streaming script: for each Linear weight W, compute a per-output-channel scale = |W|.amax(dim=1) / 448.0 (448 is the largest finite float8_e4m3fn value), then store W_fp8 = (W / scale).to(float8_e4m3fn) alongside the float32 scales. Activations are quantized per token at runtime, so no calibration data is required (the FP8_DYNAMIC scheme). A sketch of the core transform follows.
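
A minimal PyTorch sketch of that transform, not the exact script used for this repo (function name is illustrative; requires torch ≥ 2.1 for float8_e4m3fn):

import torch

FP8_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_weight_fp8(w: torch.Tensor):
    # w: [out_features, in_features] in BF16/FP32.
    # One symmetric scale per output channel, stored in float32.
    scale = w.abs().amax(dim=1, keepdim=True).float() / FP8_MAX
    scale = scale.clamp(min=1e-12)  # guard against all-zero rows
    w_fp8 = (w.float() / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale.squeeze(1)

# Dequantization for reference: w ≈ w_fp8.float() * scale.unsqueeze(1)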

License

Inherits the non-commercial MiniMax M-Series license from the base model.
