---
library_name: mlx
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - safetensors
  - onnx
  - transformers.js
  - mlx
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
---

# SmolLM2-360M Instruct (MLX, 4-bit)

This is an MLX conversion of [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct), quantized to 4 bits for fast on-device inference on Apple Silicon.

## Quickstart

Install:

```bash
pip install -U mlx-lm
```

Run:

```bash
mlx_lm.generate \
  --model Irfanuruchi/SmolLM2-360M-Instruct-MLX-4bit \
  --prompt "Reply with exactly 3 bullet points, 4–8 words each: what can you do offline?" \
  --max-tokens 80
```

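The model can also be driven from Python via the `mlx_lm` API. A minimal sketch (requires Apple Silicon; downloads the weights from the Hub on first run):

```python
from mlx_lm import load, generate

model, tokenizer = load("Irfanuruchi/SmolLM2-360M-Instruct-MLX-4bit")

# Wrap the user message in the model's chat template before generating.
messages = [{"role": "user", "content": "What can you do offline?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=80))
```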
## Benchmarks (MacBook Pro M3 Pro)

- Disk: 198 MB
- Peak RAM: 0.247 GB

Performance will vary across devices and prompts.
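The disk figure is roughly what the quantization settings predict. A back-of-envelope check, assuming mlx_lm's default group size of 64 with an fp16 scale and bias per group (the observed total is slightly lower because not every layer is quantized identically):

```python
# Back-of-envelope size estimate: 360M parameters at 4 bits,
# assuming group size 64 with fp16 scale + bias per group.
params = 360e6
base_mb = params * 4 / 8 / 1e6                   # raw 4-bit weights: 180 MB
overhead_mb = params * (2 * 16 / 64) / 8 / 1e6   # per-group scales/biases: 22.5 MB
print(base_mb + overhead_mb)                     # 202.5, close to the observed 198 MB
```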

## Notes

- Converted/quantized with `mlx_lm.convert`.
- This repo contains MLX weights and tokenizer/config files.
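
A conversion along these lines reproduces the repo; the exact flags used are not recorded here, so `--q-bits 4` with the default group size is an assumption:

```shell
# Sketch of the conversion (flags are an assumption; mlx_lm defaults shown).
mlx_lm.convert \
  --hf-path HuggingFaceTB/SmolLM2-360M-Instruct \
  --mlx-path SmolLM2-360M-Instruct-MLX-4bit \
  -q --q-bits 4 --q-group-size 64
```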

## License & attribution

Upstream model: [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) (Apache-2.0).
Please follow the upstream license and attribution requirements.