MiniMax M2 architecture quantized with a DeepSeek-like FP8 quantization scheme.
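
For illustration, here is a minimal sketch of what a DeepSeek-style FP8 scheme typically looks like: weights quantized to `e4m3` in fixed-size tiles (128×128 in DeepSeek-V3) with one scale per tile, analogous to the `weight_scale_inv` tensors in DeepSeek-V3 checkpoints. The function names, block size, and scale layout below are illustrative assumptions, not the exact recipe used for this checkpoint:

```python
import torch


def quantize_fp8_blockwise(weight: torch.Tensor, block_size: int = 128):
    """Quantize a 2-D weight matrix to FP8 (e4m3) with one scale per
    block_size x block_size tile (DeepSeek-V3-style; assumed here).

    Returns the FP8 tensor and the per-tile scale factors.
    """
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn
    rows, cols = weight.shape
    assert rows % block_size == 0 and cols % block_size == 0

    # View the matrix as a grid of (block_size x block_size) tiles.
    tiles = weight.reshape(rows // block_size, block_size,
                           cols // block_size, block_size)
    # Per-tile max magnitude -> per-tile scale so values fit in e4m3 range.
    amax = tiles.abs().amax(dim=(1, 3), keepdim=True).clamp(min=1e-12)
    scale = amax / fp8_max

    q = (tiles / scale).to(torch.float8_e4m3fn)
    return (q.reshape(rows, cols),
            scale.squeeze(1).squeeze(-1))  # shape: (rows//bs, cols//bs)


def dequantize_fp8_blockwise(q: torch.Tensor, scale: torch.Tensor,
                             block_size: int = 128) -> torch.Tensor:
    """Reverse of quantize_fp8_blockwise: rescale each tile back to float."""
    rows, cols = q.shape
    tiles = q.to(torch.float32).reshape(rows // block_size, block_size,
                                        cols // block_size, block_size)
    tiles = tiles * scale[:, None, :, None]  # broadcast one scale per tile
    return tiles.reshape(rows, cols)


if __name__ == "__main__":
    w = torch.randn(256, 256)
    q, s = quantize_fp8_blockwise(w)
    w_hat = dequantize_fp8_blockwise(q, s)
    print("max abs error:", (w - w_hat).abs().max().item())
```

The per-tile scale is what distinguishes this from plain per-tensor FP8: a single outlier only degrades the precision of its own 128×128 block rather than the whole matrix.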
