phi-3.5-mini-instruct-q4 (MLX, CBA artifact)
MLX-format 4-bit (Q4) variant of microsoft/Phi-3.5-mini-instruct.
This is one of the 15 model artifacts from the paper:
Quantization Undoes Alignment: Bias Emergence in Compressed LLMs Across Models and Precision Levels. Plawan Kumar Rath and Rahul Maliakkal. IEEE Cloud Summit 2026. Code: https://github.com/plawanrath/compression-bias-amplification
Quantization
Weight-only post-training quantization via mlx_lm.convert:
- bits: 4
- group_size: 64
- mode: affine
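Concretely, these settings mean that each contiguous group of 64 weights is stored as 4-bit integer codes plus a per-group scale and offset. The NumPy sketch below only illustrates that arithmetic; it is not the MLX implementation, which packs the codes and uses its own fused dequantization kernels.

```python
import numpy as np

def affine_quantize_group(w, bits=4, group_size=64):
    # Illustrative affine (asymmetric) group-wise quantization: each group of
    # `group_size` weights gets its own scale and offset, and values are kept
    # as `bits`-bit integer codes. Simplified sketch, not the MLX kernel.
    w = w.reshape(-1, group_size)
    qmax = 2 ** bits - 1
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = np.maximum((w_max - w_min) / qmax, 1e-8)       # avoid /0 for flat groups
    codes = np.clip(np.round((w - w_min) / scale), 0, qmax)
    w_hat = codes * scale + w_min                           # dequantized approximation
    return codes.astype(np.uint8), w_hat

weights = np.random.randn(4, 64).astype(np.float32)
codes, approx = affine_quantize_group(weights)
print("max abs reconstruction error:", float(np.abs(weights - approx).max()))
```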
How this artifact was produced
```bash
python -m mlx_lm.convert \
  --hf-path microsoft/Phi-3.5-mini-instruct \
  --mlx-path ./phi-3.5-mini-instruct-q4 \
  --quantize \
  --q-bits 4 \
  --q-group-size 64
```
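As a quick sanity check, mlx_lm.convert records the quantization settings in the converted model's config.json, so you can verify an artifact before running it. A minimal sketch; the key layout shown is an assumption based on recent mlx-lm output and may differ across versions:

```python
import json
from pathlib import Path

config = json.loads(Path("./phi-3.5-mini-instruct-q4/config.json").read_text())
# mlx_lm.convert typically stores the settings under a "quantization" entry,
# e.g. {"group_size": 64, "bits": 4}.
print(config.get("quantization"))
```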
This is the exact artifact used to produce the inference results in §4.3 of the paper (911,100 records over BBQ ambiguous: 5 seeds × 12,148 items × 15 configs).
Usage (MLX)
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("plawanrath/phi-3.5-mini-instruct-q4-mlx-cba")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```

Or via CLI:

```bash
mlx_lm.generate --model plawanrath/phi-3.5-mini-instruct-q4-mlx-cba --prompt "Hello!"
```
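mlx-lm also ships an OpenAI-compatible HTTP server (`mlx_lm.server --model plawanrath/phi-3.5-mini-instruct-q4-mlx-cba --port 8080`). A minimal client sketch, assuming such a server is already running locally on that port and returns standard chat-completion JSON:

```python
import json
from urllib.request import Request, urlopen

# Assumes mlx_lm.server is running locally on port 8080 with this model loaded.
payload = {
    "model": "plawanrath/phi-3.5-mini-instruct-q4-mlx-cba",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 128,
}
req = Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```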
Paper findings relevant to this variant
The paper documents a dose-response relationship between quantization aggressiveness and emergent stereotypical behavior on BBQ ambiguous questions:
| Variant | % of BF16-unbiased items that became biased |
|---|---|
| Q8 | 0.1–0.9% |
| Q6 | 0.3–1.3% |
| Q4 | 2.2–5.6% |
| Q3 | 6.0–21.1% |
These changes are largely invisible to perplexity (<0.5% shift at Q8 and <3% at Q4 across all three model families), so perplexity checks alone will not surface them. Evaluate compressed instruction-tuned models directly on fairness-sensitive benchmarks before deploying them to such tasks.
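If you want to reproduce the headline metric on your own runs, the percentages above are the share of BBQ ambiguous items answered without the stereotype under BF16 that flip to the stereotyped answer under the quantized variant. A minimal sketch of that computation; the column names and values below are invented for illustration and are not from the paper's released code:

```python
import pandas as pd

# Hypothetical per-item results: "biased" marks whether the model picked the
# stereotyped answer on a BBQ ambiguous item. Illustrative data only.
bf16 = pd.DataFrame({"item_id": [1, 2, 3, 4], "biased": [False, False, False, True]})
q4 = pd.DataFrame({"item_id": [1, 2, 3, 4], "biased": [False, True, True, True]})

merged = bf16.merge(q4, on="item_id", suffixes=("_bf16", "_q4"))
unbiased_at_bf16 = merged[~merged["biased_bf16"]]
emergent_pct = unbiased_at_bf16["biased_q4"].mean() * 100
print(f"{emergent_pct:.1f}% of BF16-unbiased items became biased under Q4")
```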
Model details
- Base model: microsoft/Phi-3.5-mini-instruct
- Family: Phi-3
- Parameters: 3.8B
- Precision: 4-bit (Q4)
- Format: MLX (Apple Silicon)
- Conversion framework: mlx-lm
License
Inherited from the base model (MIT). See the upstream model page for the full license text.
Citation
```bibtex
@inproceedings{rath2026quantization,
  title     = {Quantization Undoes Alignment: Bias Emergence in Compressed LLMs Across Models and Precision Levels},
  author    = {Rath, Plawan Kumar and Maliakkal, Rahul},
  booktitle = {IEEE Cloud Summit 2026},
  year      = {2026}
}
```