# SmolLM2-360M-Instruct (MLX, 4-bit)
This is an MLX conversion of HuggingFaceTB/SmolLM2-360M-Instruct, quantized to 4-bit for fast on-device inference on Apple Silicon.
## Quickstart
Install:

```shell
pip install -U mlx-lm
```
Run:

```shell
mlx_lm.generate \
  --model Irfanuruchi/SmolLM2-360M-Instruct-MLX-4bit \
  --prompt "Reply with exactly 3 bullet points, 4–8 words each: what can you do offline?" \
  --max-tokens 80
```
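The same model can also be driven from Python via the mlx-lm API. A minimal sketch (requires Apple Silicon with mlx-lm installed; the weights are downloaded from the Hub on first use):

```python
# Minimal mlx-lm Python usage sketch for this repo's 4-bit weights.
from mlx_lm import load, generate

# Downloads and caches the model on first call.
model, tokenizer = load("Irfanuruchi/SmolLM2-360M-Instruct-MLX-4bit")

# Apply the instruct chat template before generating.
messages = [{"role": "user", "content": "What can you do offline?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=80)
print(text)
```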
## Benchmarks (MacBook Pro M3 Pro)
- Disk: 198 MB
- Peak RAM: 247 MB
Performance will vary across devices and prompts.
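The disk figure lines up with a back-of-envelope estimate for group-wise 4-bit quantization. The group size and per-group fp16 scale/bias below are assumed mlx-lm defaults, not values recorded in this repo:

```python
# Rough size estimate for 4-bit group-quantized weights.
# Assumptions (not stated in this repo): group size 64, one fp16 scale
# and one fp16 bias per group, ~360M quantized parameters.
params = 360e6
bits_per_weight = 4 + (2 * 16) / 64   # 4-bit weight + amortized scale/bias
size_mb = params * bits_per_weight / 8 / 1e6
print(f"~{size_mb:.0f} MB")           # close to the 198 MB observed on disk
```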
## Notes
- Converted and quantized with `mlx_lm.convert`.
- This repo contains MLX weights and tokenizer/config files.
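For reference, a conversion along these lines can be reproduced with the mlx-lm CLI (flag names as of current mlx-lm; the exact options used for this repo are not recorded here):

```shell
# Quantize the upstream instruct model to 4-bit MLX weights.
mlx_lm.convert \
  --hf-path HuggingFaceTB/SmolLM2-360M-Instruct \
  --mlx-path SmolLM2-360M-Instruct-MLX-4bit \
  -q --q-bits 4
```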
## License & attribution
Upstream model: HuggingFaceTB/SmolLM2-360M-Instruct (Apache-2.0).
Please follow the upstream license and attribution requirements.
## Model tree

- Base model: HuggingFaceTB/SmolLM2-360M
- Quantized from: HuggingFaceTB/SmolLM2-360M-Instruct