Hugging Face
Models
Models (66) • Sort: Trending • Active filter: MiniMaxAI/MiniMax-M2.5 (Model Tree)
nvidia/MiniMax-M2.5-NVFP4 • Text Generation • 116B • Updated 8 days ago • 10.6k downloads • 17 likes
unsloth/MiniMax-M2.5-GGUF • Text Generation • 229B • Updated Feb 14 • 63.7k downloads • 216 likes
lukealonso/MiniMax-M2.5-NVFP4 • 130B • Updated 23 days ago • 36.9k downloads • 45 likes
lukealonso/MiniMax-M2.5-REAP-139B-A10B-NVFP4 • 80B • Updated Feb 23 • 27.7k downloads • 32 likes
mlx-community/MiniMax-M2.5-3bit • Text Generation • 229B • Updated Feb 13 • 1.68k downloads • 9 likes
ahoybrotherbear/MiniMax-M2.5-4bit-MLX • Text Generation • 229B • Updated Feb 13 • 223 downloads • 1 like
mlx-community/MiniMax-M2.5-6bit • Text Generation • 229B • Updated Feb 13 • 2.25k downloads • 4 likes
inferencerlabs/MiniMax-M2.5-MLX-6.5bit • Text Generation • 229B • Updated Feb 13 • 256 downloads • 2 likes
lmstudio-community/MiniMax-M2.5-MLX-8bit • Text Generation • 229B • Updated Feb 13 • 68.5k downloads • 3 likes
MikeRoz/MiniMax-M2.5-exl3 • Text Generation • Updated Feb 16 • 16 downloads • 8 likes
mratsim/MiniMax-M2.5-BF16-INT4-AWQ • Text Generation • 39B • Updated Feb 17 • 17.8k downloads • 38 likes
QuantTrio/MiniMax-M2.5-AWQ • Text Generation • 229B • Updated Feb 16 • 61.7k downloads • 12 likes
mratsim/MiniMax-M2.5-FP8-INT4-AWQ • Text Generation • 39B • Updated Feb 17 • 10.1k downloads • 19 likes
wimmmm/MiniMax-M2.5-REAP-172B-A10B-GGUF • 173B • Updated 15 days ago • 546 downloads • 2 likes
ahmed-ali/MiniMax-M2.5-MLX-6.5bit • Text Generation • 229B • Updated Feb 20 • 247 downloads • 2 likes
JANGQ-AI/MiniMax-M2.5-JANG_3L • Text Generation • 26B • Updated 8 days ago • 412 downloads • 2 likes
ox-ox/MiniMax-M2.5-GGUF • Text Generation • 229B • Updated Feb 13 • 9.58k downloads • 19 likes
mlx-community/MiniMax-M2.5-8bit • Text Generation • 229B • Updated Feb 13 • 5.03k downloads • 2 likes
DevQuasar/MiniMaxAI.MiniMax-M2.5-GGUF • Text Generation • 229B • Updated Feb 17 • 322 downloads • 7 likes
mlx-community/MiniMax-M2.5-4bit • Text Generation • 229B • Updated Feb 13 • 3.29k downloads • 1 like
inferencerlabs/MiniMax-M2.5-MLX-9bit • Text Generation • 229B • Updated Feb 13 • 672 downloads • 4 likes
lmstudio-community/MiniMax-M2.5-MLX-4bit • Text Generation • 229B • Updated Feb 13 • 56.3k downloads
ubergarm/MiniMax-M2.5-GGUF • Text Generation • 229B • Updated Feb 15 • 1.03k downloads • 51 likes
mlx-community/MiniMax-M2.5-5bit • Text Generation • 229B • Updated Feb 13 • 1.56k downloads • 1 like
mlx-community/MiniMax-M2.5-8bit-gs32 • Text Generation • 229B • Updated Feb 13 • 591 downloads • 2 likes
marksverdhei/MiniMax-M2.5-GGUF • 229B • Updated Feb 13 • 347 downloads • 3 likes
lmstudio-community/MiniMax-M2.5-GGUF • 229B • Updated Feb 13 • 10.2k downloads • 4 likes
lmstudio-community/MiniMax-M2.5-MLX-6bit • Text Generation • 229B • Updated Feb 13 • 54.9k downloads
thad0ctor/MiniMax-M2.5-Q6_K-GGUF • 229B • Updated Feb 13 • 46 downloads
m-i/MiniMax-M2.5-mixed-2-8bit • Text Generation • 229B • Updated Feb 13 • 182 downloads
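The download counts above appear in the Hub's abbreviated display form (e.g. "63.7k", "10.6k"). A minimal sketch of turning those strings back into integers for comparison or sorting; `parse_count` is a hypothetical helper written for this listing, not part of any Hugging Face library:

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated count such as '63.7k' or '216' to an integer.

    Assumes the suffixes 'k', 'm', and 'b' seen on Hub listing pages;
    anything else is treated as a plain integer.
    """
    suffixes = {"k": 1_000, "m": 1_000_000, "b": 1_000_000_000}
    text = text.strip().lower()
    if text and text[-1] in suffixes:
        return int(float(text[:-1]) * suffixes[text[-1]])
    return int(text)


# Examples using values from the listing above:
print(parse_count("63.7k"))  # 63700
print(parse_count("216"))    # 216
```

Note that the abbreviated form is lossy: "63.7k" only bounds the true count to the nearest hundred, so the parsed values are approximations suitable for ranking, not exact totals.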
Page 1 of 3