---
library_name: mlx
tags:
- mlx
- quantized
- mixed-precision
- minimax
- minimax_m2
- moe
license: other
license_name: minimax-m2-license
license_link: LICENSE
base_model: MiniMaxAI/MiniMax-M2.7
base_model_relation: quantized
pipeline_tag: text-generation
language: en
---
# MiniMax-M2.7 — 155 GB (MLX)

An earlier build in the MiniMax-M2.7 mixed-precision MLX family, quantized by baa.ai.
## Current builds
For updated builds with HumanEval results and recommended inference settings, see:
| Variant | Size | Link |
|---|---|---|
| 100 GB | 100.1 GB | [baa-ai/MiniMax-M2.7-RAM-100GB-MLX](https://huggingface.co/baa-ai/MiniMax-M2.7-RAM-100GB-MLX) |
| 111 GB | 110.9 GB | [baa-ai/MiniMax-M2.7-RAM-111GB-MLX](https://huggingface.co/baa-ai/MiniMax-M2.7-RAM-111GB-MLX) |
| 116 GB | 116.0 GB | [baa-ai/MiniMax-M2.7-RAM-116GB-MLX](https://huggingface.co/baa-ai/MiniMax-M2.7-RAM-116GB-MLX) |
| 120 GB | 120.1 GB | [baa-ai/MiniMax-M2.7-RAM-120GB-MLX](https://huggingface.co/baa-ai/MiniMax-M2.7-RAM-120GB-MLX) |
## Usage
```python
from mlx_lm import load, generate

model, tokenizer = load("baa-ai/MiniMax-M2.7-RAM-155GB-MLX")
response = generate(model, tokenizer, prompt="Hello!", max_tokens=512)
print(response)
```
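As an alternative to the Python API, the `mlx-lm` package also installs a command-line generator. A minimal sketch, assuming the same repo ID as this card; the prompt and token limit are illustrative:

```shell
# Install the MLX LM package (requires an Apple-silicon Mac)
pip install mlx-lm

# One-shot generation; the model weights (~155 GB) are downloaded on first use
mlx_lm.generate \
  --model baa-ai/MiniMax-M2.7-RAM-155GB-MLX \
  --prompt "Hello!" \
  --max-tokens 512
```

Note that the full weights must fit in unified memory, so this variant targets machines with at least 155 GB of RAM.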
## License
Inherited from the upstream MiniMax-M2.7 license: non-commercial use is permitted; commercial use requires written authorization from MiniMax.
Quantized by baa.ai