Converted from `mistralai/Ministral-3-3B-Instruct-2512-BF16` using `mlx_lm.convert` with `--q-mode mxfp8 --q-group-size 32` on 2026-03-05, 21:51.
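The conversion can be reproduced with a command along these lines. This is a sketch based on the flags stated above; the output directory name is an assumption, and the exact flag set of `mlx_lm.convert` can vary between mlx-lm releases, so check `mlx_lm.convert --help` for your installed version.

```shell
# Install mlx-lm (MLX runs on Apple silicon).
pip install mlx-lm

# Quantize the BF16 weights to MXFP8 with group size 32,
# matching the settings stated on this card.
# --mlx-path (output directory) is an assumed name, not from the card.
mlx_lm.convert \
  --hf-path mistralai/Ministral-3-3B-Instruct-2512-BF16 \
  -q --q-mode mxfp8 --q-group-size 32 \
  --mlx-path Ministral-3-3B-Instruct-2512-mlx-mxfp8
```

The resulting directory can be used locally or uploaded to the Hub as its own repository.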

Downloads last month: 97

- Model size: 1B params (Safetensors)
- Tensor types: U8, U32, BF16
- Format: MLX

- Quantization: 8-bit


Model tree for dumtjul/Ministral-3-3B-Instruct-2512-mlx-mxfp8
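The quantized model can be loaded directly with mlx-lm's Python API. A minimal sketch, assuming mlx-lm is installed on an Apple silicon machine and the repository can be downloaded from the Hub (the prompt text is illustrative):

```python
# Minimal mlx-lm usage sketch for this repository.
from mlx_lm import load, generate

# Downloads and loads the MXFP8-quantized weights from the Hub.
model, tokenizer = load("dumtjul/Ministral-3-3B-Instruct-2512-mlx-mxfp8")

# Build a chat-formatted prompt for the instruct model.
messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response (verbose=True streams tokens to stdout).
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The same model can also be served over HTTP with `mlx_lm.server` if an OpenAI-compatible endpoint is preferred.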