Context-1 — MLX 4-bit

MLX quantization of chromadb/context-1 for Apple Silicon.

Key Specs

Architecture: Mixture-of-Experts (MoE) decoder-only Transformer
Base model: gpt-oss-20b
Total parameters: 20B
Experts: 32 routed, 4 active per token
Context length: up to 131,072 tokens
Attention: alternating sliding-window (128 tokens) and full attention
Quantization: 4-bit affine, group size 64
Original precision: BF16
Disk size: ~11 GB
Peak memory: ~12 GB
Chat template: supported
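
The quantization scheme above can be illustrated with a minimal NumPy sketch of 4-bit affine quantization at group size 64: each group of 64 weights gets its own scale and offset so that 16 integer levels cover that group's value range. This shows the arithmetic only, not MLX's actual packed kernel.

```python
import numpy as np

def quantize_affine_4bit(weights: np.ndarray, group_size: int = 64):
    """Quantize weights in groups; each group gets its own scale and offset."""
    w = weights.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0  # 2**4 - 1 = 15 integer steps per group
    codes = np.round((w - w_min) / scale).astype(np.uint8)  # values in [0, 15]
    return codes, scale, w_min

def dequantize(codes, scale, w_min):
    """Recover approximate weights from 4-bit codes plus per-group scale/offset."""
    return codes * scale + w_min

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
codes, scale, offset = quantize_affine_4bit(w)
w_hat = dequantize(codes, scale, offset).reshape(-1)
print("max abs error:", np.abs(w - w_hat).max())
```

The per-group scale is why quantization error stays bounded by half a step even when different parts of a weight matrix have very different magnitudes.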

What is Context-1?

Context-1 is a 20B parameter agentic search model designed to retrieve supporting documents for complex, multi-hop queries. It works as a retrieval subagent alongside frontier reasoning models.

Key capabilities:

  • Query decomposition — breaks complex multi-constraint questions into targeted subqueries
  • Parallel tool calling — averages 2.56 tool calls per turn
  • Self-editing context — prunes irrelevant documents mid-search (0.94 prune accuracy)
  • Cross-domain generalization — trained on web, legal, and finance tasks
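
The first two capabilities can be sketched as a toy decompose-then-search-in-parallel loop. Everything here is hypothetical (the `decompose` and `search` helpers, the example constraints); Context-1 learns these behaviors rather than following hand-written rules.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(question, constraints):
    # Split a multi-constraint question into one targeted subquery per constraint.
    return [f"{question} ({c})" for c in constraints]

def search(query):
    # Stand-in for a real search tool call.
    return f"docs for: {query}"

subqueries = decompose("startup acquisition", ["founded 2015", "Berlin HQ"])

# Issue the subqueries concurrently, mirroring parallel tool calling.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(search, subqueries))
print(results)
```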

Performance is comparable to frontier LLMs at a fraction of the cost, with up to 10x faster inference.

Performance on Apple Silicon

Prompt processing: 227 tokens/sec
Generation: 172 tokens/sec
Peak memory: 12 GB

Requirements

  • Apple Silicon Mac with 16GB+ unified memory
  • mlx-lm >= 0.31.2

Install with:

pip install mlx-lm

Usage

CLI

mlx_lm.generate \
  --model mlx-community/context-1-MLX-4bit \
  --prompt "Your prompt here" \
  --max-tokens 256

Python

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/context-1-MLX-4bit")

prompt = "Your prompt here"

# The model ships with a chat template; apply it so the prompt is
# formatted the way the model was trained to expect.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)

LM Studio

This model is compatible with LM Studio on Apple Silicon. Search for context-1-MLX-4bit in the model browser and download directly.

Important: Agent Harness

Context-1 is designed to work with a specific agent harness that manages tool execution, token budgets, context pruning, and deduplication. The harness is not yet publicly released by Chroma. Running the model without it will not reproduce the reported benchmark results.
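
As a rough sketch of the responsibilities listed above (tool execution, token budgets, context pruning, deduplication), here is a toy harness loop. It assumes nothing about Chroma's unreleased implementation; every name and data shape in it is hypothetical.

```python
def run_search_turn(tool_calls, execute_tool, context, token_budget):
    """Execute one turn of tool calls and fold new documents into the context."""
    seen = {d["id"] for d in context}  # dedup against documents already held
    for call in tool_calls:
        for doc in execute_tool(call):
            if doc["id"] in seen:
                continue  # skip duplicate retrievals
            seen.add(doc["id"])
            context.append(doc)
    # Enforce the token budget: drop the oldest documents until under budget.
    while sum(d["tokens"] for d in context) > token_budget:
        context.pop(0)
    return context

def apply_prune(context, pruned_ids):
    """Drop documents the model marked irrelevant (self-editing context)."""
    return [d for d in context if d["id"] not in pruned_ids]

# Toy usage with a fake search tool; note the duplicate "q1" call is deduped.
def fake_search(query):
    return [{"id": query, "tokens": 50, "text": f"result for {query}"}]

ctx = run_search_turn(["q1", "q2", "q1"], fake_search, [], token_budget=120)
ctx = apply_prune(ctx, {"q2"})
print([d["id"] for d in ctx])
```

The real harness would also route the model's tool-call messages to actual search backends and feed pruned context back into the next turn; none of that is shown here.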

See the technical report for details on the agent harness design.

License

Apache 2.0

Citation

@techreport{bashir2026context1,
  title = {Chroma Context-1: Training a Self-Editing Search Agent},
  author = {Bashir, Hammad and Hong, Kelly and Jiang, Patrick and Shi, Zhiyi},
  year = {2026},
  month = {March},
  institution = {Chroma},
  url = {https://trychroma.com/research/context-1},
}