Tags: Text Generation · MLX · Safetensors · English · Chinese · qwen2 · Merge · mlx-my-repo · conversational · 4-bit precision
How to use from Pi

# Install Pi:
npm install -g @mariozechner/pi-coding-agent

Configure the model in Pi

The baseUrl below should point at the local MLX server (see "Start the MLX server" at the end of this card).

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "mlx-community/QwQ-Coder-instruct-mlx-4Bit"
        }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
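Before launching Pi, you can sanity-check the configured endpoint with any OpenAI-compatible client. This is a minimal sketch, assuming the openai Python package is installed (pip install openai) and the MLX server (see "Start the MLX server" at the end of this card) is running; the base URL and model id are taken from the models.json above:

from openai import OpenAI

# Point the client at the same endpoint configured in ~/.pi/agent/models.json.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

reply = client.chat.completions.create(
    model="mlx-community/QwQ-Coder-instruct-mlx-4Bit",
    messages=[{"role": "user", "content": "hello"}],
)
print(reply.choices[0].message.content)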
A newer version of this model is available: YOYO-AI/YOYO-O1-32B-V4-preview4
bobig/QwQ-Coder-instruct-mlx-4Bit
This is pretty good: QwQ brains and memory + Qwen code instruct.
Now in delicious MLX. Eat it or wear it.
32k context is solid in QwQ: https://fiction.live/stories/Fiction-liveBench-Mar-14-2025/oQdzQvKHw8JyXbN87
Test Prompt: Write a quick sort in C++
The model bobig/QwQ-Coder-instruct-mlx-4Bit was converted to MLX format from YOYO-AI/QwQ-Coder-instruct using mlx-lm version 0.21.5.
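For reference, a 4-bit conversion of this kind can be reproduced with mlx-lm's Python convert API. This is a sketch, not the exact command used for this repo; quantize=True uses mlx-lm's default 4-bit settings, and the output path is an illustrative choice:

from mlx_lm import convert

# Quantize the merged base model to 4-bit MLX weights (illustrative output path).
convert(
    "YOYO-AI/QwQ-Coder-instruct",
    mlx_path="QwQ-Coder-instruct-mlx-4Bit",
    quantize=True,
)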
Use with mlx
pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("bobig/QwQ-Coder-instruct-mlx-4Bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
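To try the test prompt mentioned above, the same API works with a chat-templated prompt and an explicit token budget. max_tokens is a standard generate() argument in mlx-lm; the value here is only an illustrative choice, since QwQ-style models produce long reasoning traces:

from mlx_lm import load, generate

model, tokenizer = load("bobig/QwQ-Coder-instruct-mlx-4Bit")

# The test prompt from this card; allow plenty of tokens for the reasoning trace.
messages = [{"role": "user", "content": "Write a quick sort in C++"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=2048, verbose=True)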
Model tree for mlx-community/QwQ-Coder-instruct-mlx-4Bit
Base model: YOYO-AI/QwQ-Coder-instruct
Start the MLX server
# Install MLX LM:
uv tool install mlx-lm
# Start a local OpenAI-compatible server:
mlx_lm.server --model "mlx-community/QwQ-Coder-instruct-mlx-4Bit"
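Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch using the requests package (assumed installed); the host and port are mlx_lm.server's defaults and match the Pi config above:

import requests

# Chat completion request against the local OpenAI-compatible endpoint.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "mlx-community/QwQ-Coder-instruct-mlx-4Bit",
        "messages": [{"role": "user", "content": "Write a quick sort in C++"}],
        "max_tokens": 2048,
    },
)
print(resp.json()["choices"][0]["message"]["content"])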