exllamav3 quantizations of MiniMaxAI/MiniMax-M2.5.

- 2.00 bpw h6 (quantizing)
- 3.00 bpw h6, 81.613 GiB (uploading)
- 4.00 bpw h6 (quantizing)
- 5.00 bpw h6 (quantizing)
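Since only the 3.00 bpw variant has a reported size (81.613 GiB), the sizes of the other variants can be roughly estimated by scaling linearly with bits per weight. This is a minimal sketch under that assumption; it ignores any fixed overhead such as the head layer (the "h6" suffix conventionally denotes 6-bit head quantization in EXL3 naming) and embeddings, so the numbers are approximations, not exact file sizes.

```python
# Scale the one reported size in this card (3.00 bpw h6 = 81.613 GiB)
# to estimate the on-disk size of the other bpw variants. Assumes size
# is proportional to bits per weight, which ignores fixed-size tensors.
REF_GIB = 81.613   # reported size of the 3.00 bpw h6 quant
REF_BPW = 3.00

def estimated_size_gib(bpw: float) -> float:
    """Linearly scale the reference size by bits per weight."""
    return REF_GIB * bpw / REF_BPW

for bpw in (2.00, 3.00, 4.00, 5.00):
    print(f"{bpw:.2f} bpw ~ {estimated_size_gib(bpw):.1f} GiB")
```

Under this linear assumption, the 2.00, 4.00, and 5.00 bpw quants would land near 54 GiB, 109 GiB, and 136 GiB respectively, which may help pick a variant for a given VRAM budget.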


These quantizations are published under MikeRoz/MiniMax-M2.5-exl3, derived from MiniMaxAI/MiniMax-M2.5.