The model damienclere/FuseChat-Llama-3.2-3B-Instruct-4bit was converted to MLX format from FuseAI/FuseChat-Llama-3.2-3B-Instruct using mlx-lm version 0.21.1.
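A minimal usage sketch with the mlx-lm Python API. The `load`/`generate` calls are the standard mlx-lm interface; the prompt text is illustrative, and running this requires Apple silicon plus a download of the model weights:

```python
# Requires: pip install mlx-lm (runs on Apple silicon)
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit quantized model and its tokenizer
model, tokenizer = load("damienclere/FuseChat-Llama-3.2-3B-Instruct-4bit")

prompt = "What is the capital of France?"

# Wrap the prompt in the model's chat template when one is available,
# since this is an instruction-tuned chat model
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```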
- Model size: 0.5B params
- Tensor types: F16, U32
- Quantization: 4-bit
Model tree for damienclere/FuseChat-Llama-3.2-3B-Instruct-4bit-mlx
- Base model: FuseAI/FuseChat-Llama-3.2-3B-Instruct