# Phi-4-mini-instruct (MLX 8-bit)
This is an 8-bit MLX quantized version of microsoft/Phi-4-mini-instruct. Compared with lower-bit (e.g. 4-bit) quantizations, it offers higher-quality output at the cost of increased memory use.
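For reference, a comparable 8-bit conversion can be produced with the `mlx_lm.convert` utility. This is a sketch, not the exact command used for this repo; the output path is illustrative and flag names assume a current `mlx-lm` release:

```bash
# Quantize the original Hugging Face weights to 8-bit MLX format
# (the --mlx-path output directory name is illustrative)
mlx_lm.convert \
  --hf-path microsoft/Phi-4-mini-instruct \
  --mlx-path ./Phi-4-mini-instruct-MLX-8bit \
  -q --q-bits 8
```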
## Benchmark Environment
- Device: MacBook Pro (M3 Pro)
- Runtime: MLX
- Precision: 8-bit (~8.5 bits per weight)
## Performance (Measured)
- Disk size: ~3.8 GB
- Peak memory: ~4.15 GB
- Generation speed: ~32 tokens/sec
Benchmarks were collected on macOS (M3 Pro).
iPhone / iPad performance will vary depending on hardware and memory.
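As a rough sanity check, the on-disk size is consistent with the base model's parameter count (~3.8B for Phi-4-mini-instruct, which is an assumption taken from the upstream model card) at the stated ~8.5 bits per weight:

$$3.8 \times 10^9\ \text{weights} \times \frac{8.5\ \text{bits/weight}}{8\ \text{bits/byte}} \approx 4.0\ \text{GB} \approx 3.8\ \text{GiB}$$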
## Usage

```bash
mlx_lm.generate \
  --model Irfanuruchi/Phi-4-mini-instruct-MLX-8bit \
  --prompt "Write a 1-paragraph plan for learning Spanish in 30 days." \
  --max-tokens 160
```
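The `mlx_lm.generate` entry point ships with the `mlx-lm` Python package. For multi-turn use, the package also provides an interactive chat command; this is a minimal sketch assuming a current `mlx-lm` release:

```bash
# Install the runtime, then start an interactive chat session with this model
pip install mlx-lm
mlx_lm.chat --model Irfanuruchi/Phi-4-mini-instruct-MLX-8bit
```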
## License
Original model license applies. See microsoft/Phi-4-mini-instruct.