# QwQ-32B-MLX-Q5
QwQ-32B-MLX-Q5 is an MLX Q5 checkpoint derived from Qwen/QwQ-32B, intended for local text generation on Apple Silicon.
## Intended use
- Local text generation and chat-style prompting on Apple Silicon
- MLX-LM experimentation with the declared upstream model family
- Offline or operator-controlled inference workflows
## Out of scope
- Safety-critical decisions without domain expert review
- Claims of benchmark superiority not backed by published evaluation data
- Non-MLX runtime guarantees; this card documents the shipped HF checkpoint, not every possible serving stack
## Training and conversion metadata
| Parameter | Value |
|---|---|
| Repository | LibraxisAI/QwQ-32B-MLX-Q5 |
| Base model | Qwen/QwQ-32B |
| Task | text-generation |
| Library | mlx |
| Format | MLX / Apple Silicon checkpoint |
| Quantization | Q5 |
| Architecture | Qwen2ForCausalLM |
| Model files | 5 |
| Config model_type | qwen2 |
This card only reports metadata present in the Hugging Face repository, existing card frontmatter, or public config files. Missing benchmark, dataset, or training-run details are left explicit rather than reconstructed.
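As a quick way to confirm these values against the published repository, the config can be fetched and inspected directly. A minimal sketch, assuming network access and the `huggingface_hub` package; the exact layout of the quantization entry is an assumption based on common mlx-lm conversion output:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the published config.json for this repository
config_path = hf_hub_download("LibraxisAI/QwQ-32B-MLX-Q5", "config.json")
with open(config_path) as f:
    config = json.load(f)

print(config["model_type"])      # expected: "qwen2"
print(config["architectures"])   # expected: ["Qwen2ForCausalLM"]
# mlx-lm conversions typically record quantization settings here,
# e.g. {"group_size": ..., "bits": 5}; treat the exact layout as an assumption
print(config.get("quantization"))
```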
## Tested inference path
**Inference for this checkpoint has been tested with LibraxisAI/mlx-batch-server.**

This is the recommended, tested path for operator-controlled local inference on Apple Silicon.
| Aspect | Status |
|---|---|
| Tested runtime | LibraxisAI/mlx-batch-server |
| Target hardware | Apple Silicon |
| Inference mode | Local / self-hosted |
| Hugging Face Hosted Inference | Disabled for this repository (inference: false) |
This does not claim compatibility with every possible serving stack. It documents the path that has been exercised for this published checkpoint.
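For reference, a minimal request sketch against a locally running mlx-batch-server instance. The endpoint path, port, and payload shape below are assumptions (many MLX serving stacks expose an OpenAI-compatible chat completions route); check the mlx-batch-server README for the actual API before relying on this:

```python
import json
import urllib.request

# Hypothetical endpoint: adjust host, port, and path to match your
# mlx-batch-server deployment; an OpenAI-compatible route is assumed here.
url = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "LibraxisAI/QwQ-32B-MLX-Q5",
    "messages": [
        {"role": "user", "content": "Summarize the key signals in this document."}
    ],
    "max_tokens": 400,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# The response shape is also an assumption, following the OpenAI convention
print(body["choices"][0]["message"]["content"])
```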
## Usage

### CLI
```bash
pip install mlx-lm

mlx_lm.generate \
  --model LibraxisAI/QwQ-32B-MLX-Q5 \
  --prompt "Summarize the key signals in this document and list the next action items." \
  --max-tokens 400
```
### Python
```python
from mlx_lm import load, generate

model, tokenizer = load("LibraxisAI/QwQ-32B-MLX-Q5")
prompt = "Summarize the key signals in this document and list the next action items."
response = generate(model, tokenizer, prompt=prompt, max_tokens=400)
print(response)
```
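For longer generations it is often more practical to stream tokens as they are produced. A minimal sketch using `mlx_lm.stream_generate`; depending on the mlx-lm version, the iterator yields either plain text chunks or response objects with a `.text` field, so the example handles both:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("LibraxisAI/QwQ-32B-MLX-Q5")
prompt = "Summarize the key signals in this document and list the next action items."

# Print each chunk as soon as it is generated instead of waiting for the
# full completion; .text is used when the chunk is a response object
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=400):
    print(getattr(chunk, "text", chunk), end="", flush=True)
print()
```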
### Multi-turn with the chat template

This checkpoint follows the tokenizer/chat-template contract inherited from Qwen/QwQ-32B when the template is present in the repository:
```python
from mlx_lm import load, generate

model, tokenizer = load("LibraxisAI/QwQ-32B-MLX-Q5")
messages = [
    {"role": "user", "content": "Summarize the key signals in this document and list the next action items."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
response = generate(model, tokenizer, prompt=prompt, max_tokens=400)
print(response)
```
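To continue the conversation, append the assistant's reply and the next user turn to `messages` and re-apply the chat template. A sketch continuing from the snippet above, using only the calls already shown:

```python
# Continue the same conversation: feed the previous reply back in.
# QwQ emits reasoning before its final answer; upstream Qwen guidance
# recommends keeping only the final answer (not the thinking section)
# in the conversation history.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Turn those action items into a checklist."})

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
followup = generate(model, tokenizer, prompt=prompt, max_tokens=400)
print(followup)
```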
## Example output
No public sample output is currently declared for this checkpoint.
## Quantization notes
| Aspect | Original/base checkpoint | This checkpoint |
|---|---|---|
| Lineage | Qwen/QwQ-32B | LibraxisAI/QwQ-32B-MLX-Q5 |
| Runtime target | Upstream runtime format | MLX on Apple Silicon |
| Quantization | Base precision or upstream-declared format | Q5 (5-bit) |
| Published quality delta | Not declared in public metadata | Not declared in public metadata |
## Limitations
- No public benchmarks for this checkpoint are declared in the model metadata.
- No public benchmark claims are made by this card unless listed in the frontmatter.
- Validate outputs on your own domain data before relying on this checkpoint.
- Memory use and speed depend heavily on the exact Apple Silicon generation, unified-memory size, and prompt length; see the back-of-envelope sketch below.
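As a rough illustration of the weight footprint, a back-of-envelope sketch. The parameter count and per-weight overhead are approximations, not published figures for this exact conversion:

```python
# Rough weight-memory estimate for a 5-bit quantized ~32.8B-parameter model.
# Group-wise quantization stores scales/biases per group, adding overhead;
# the numbers below are illustrative assumptions, not measured values.
params = 32.8e9            # approximate parameter count of QwQ-32B
bits_per_weight = 5.5      # ~5 bits plus assumed group-quantization overhead

weight_bytes = params * bits_per_weight / 8
print(f"~{weight_bytes / 1024**3:.1f} GiB of weights")  # roughly 21 GiB

# Actual runtime use is higher: the KV cache grows with context length and
# the runtime itself needs headroom, so plan for more unified memory.
```

On that basis, a machine with well over 21 GiB of unified memory is a practical starting point, though this is an estimate rather than a measured requirement.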
## License

Apache-2.0. Check the license of the declared base model (Qwen/QwQ-32B) as well.
## Citation
```bibtex
@misc{libraxisai-qwq-32b-mlx-q5,
  title        = {QwQ-32B-MLX-Q5},
  author       = {LibraxisAI},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/LibraxisAI/QwQ-32B-MLX-Q5}},
  note         = {MLX checkpoint published by LibraxisAI}
}
```
With AI Agents by VetCoders (c) 2024-2026 LibraxisAI