# MiniMax-M2.7 (MLX)

A collection of RAM-sized quantized versions of MiniMaxAI/MiniMax-M2.7 for Apple Silicon, available in five size points from 90 GB to 203 GB.
Early feedback suggests that coding quality degrades at this quantization level, so it may be worth moving to the next size up if your system can support it.
A quantized build of MiniMaxAI/MiniMax-M2.7 produced by baa.ai.
| Property | Value |
|---|---|
| Size on disk | 90.1 GB |
| Format | MLX |
| Base model | MiniMaxAI/MiniMax-M2.7 |
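Whether a given size point fits on your machine depends on how much of the Mac's unified memory the GPU can actually wire for model weights, plus headroom for the KV cache and activations. A rough back-of-the-envelope sketch (the 75% wired-memory fraction and the 4 GB headroom figure are assumptions for illustration, not measured values):

```python
def fits(model_gb: float, unified_ram_gb: float,
         wired_fraction: float = 0.75, headroom_gb: float = 4.0) -> bool:
    """Rough check: can a model of `model_gb` run on a Mac with
    `unified_ram_gb` of unified memory?

    Assumes the GPU can wire roughly `wired_fraction` of total RAM
    (macOS caps this below 100%) and reserves `headroom_gb` for the
    KV cache and activations. Both figures are assumptions; tune
    them for your machine.
    """
    return model_gb + headroom_gb <= unified_ram_gb * wired_fraction

# The 90.1 GB build on a 128 GB Mac: 94.1 <= 96.0
print(fits(90.1, unified_ram_gb=128))  # True
# The same build on a 96 GB Mac: 94.1 <= 72.0
print(fits(90.1, unified_ram_gb=96))   # False
```

By this estimate the 90 GB build targets 128 GB machines; larger size points in the collection would need correspondingly more unified memory.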
To run with `mlx-lm` (`pip install mlx-lm`):

```python
from mlx_lm import load, generate

# Downloads the weights from the Hugging Face Hub on first use
model, tokenizer = load("baa-ai/MiniMax-M2.7-RAM-90GB-MLX")

response = generate(
    model, tokenizer,
    prompt="Hello!",
    max_tokens=512,
    verbose=True,
)
```
Inherited from the upstream MiniMax-M2.7 license: non-commercial use permitted; commercial use requires written authorization from MiniMax.
Quantization: 4-bit