Part of the Outlier shipping lineup. Outlier is a free macOS app that runs this model locally, with one click. Apple Silicon only.

Outlier Code 27B (MLX 4-bit)

A code-oriented configuration of the Core 27B weights: the same safetensors, but with a different chat template, a lower default temperature, and a code-specialized system prompt. Use this tier if your primary workflow is code generation or repo-aware editing.

Try it in Outlier

The simplest way to use this model is through the Outlier app — open the tier picker, select Outlier Code, click download, and chat. No setup, no Python, no MLX install, no token quotas.

Download Outlier — outlier.host

A screenshot of the tier picker is at outlier.host/screenshots/tier-picker.png.

Load this directly (power users)

If you want the raw MLX-4bit weights without the app:

From the command line:

pip install mlx-lm
python -m mlx_lm.generate \
  --model Outlier-Ai/Outlier-Code-27B-MLX-4bit \
  --prompt "Write a quicksort in Python." \
  --max-tokens 512

Or from Python:

from mlx_lm import load, generate

model, tokenizer = load("Outlier-Ai/Outlier-Code-27B-MLX-4bit")
print(generate(model, tokenizer, prompt="Hello", max_tokens=256))
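
Note that the raw weights do not apply the code-tier defaults described above. The sketch below shows one way to approximate them with mlx_lm's chat template and sampler; the actual system prompt and temperature the Outlier app uses are not published, so the values here are placeholder assumptions.

from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("Outlier-Ai/Outlier-Code-27B-MLX-4bit")

# Placeholder system prompt; the app's actual code-specialized prompt is not published.
messages = [
    {"role": "system", "content": "You are a careful coding assistant."},
    {"role": "user", "content": "Write a quicksort in Python."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# The tier description says "lower temperature"; 0.2 is an assumed value.
sampler = make_sampler(temp=0.2)
print(generate(model, tokenizer, prompt=prompt, sampler=sampler, max_tokens=512))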

Verified benchmarks

For σ-qualified MMLU, HumanEval, and Mac inference-speed numbers — with full provenance (source file, command, n, stderr, date) — see outlier.host/benchmarks.

License

Apache 2.0, inherited from the upstream base model. This repository is a conversion artifact only; the underlying weights remain governed by the base model's license.

Model details

Format: safetensors (MLX), 4-bit quantization
Tensor types: BF16, U32

Model tree for Outlier-Ai/Outlier-Code-27B-MLX-4bit

Base model: Qwen/Qwen3.6-27B
This repo is the 4-bit MLX quantization, one of 256 quantized variants of the base model.
