Converted from mistralai/Ministral-3-3B-Instruct-2512-BF16 using mlx_lm.convert with --q-mode mxfp8 --q-group-size 32 on 2026-03-05, 21:51.
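Since this is an MLX quantization, it can be loaded with the `mlx-lm` package on Apple silicon. A minimal usage sketch (assumes `mlx-lm` is installed via `pip install mlx-lm`; the prompt text is illustrative):

```python
# Sketch: load the MXFP8 MLX checkpoint and run a short generation.
# Requires Apple silicon and downloads the model from the Hugging Face Hub.
from mlx_lm import load, generate

model, tokenizer = load("dumtjul/Ministral-3-3B-Instruct-2512-mlx-mxfp8")

# Format the request with the model's chat template (instruct model).
messages = [{"role": "user", "content": "Summarize what MXFP8 quantization is."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```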
- Downloads last month: 97
- Model size: 1B params
- Tensor types: U8 · U32 · BF16
- Hardware compatibility: 8-bit
Model tree for dumtjul/Ministral-3-3B-Instruct-2512-mlx-mxfp8
- Base model: mistralai/Ministral-3-3B-Base-2512