---
language: en
license: apache-2.0
library_name: mlx
pipeline_tag: text-generation
base_model: chromadb/context-1
tags:
- mlx
- safetensors
- gpt_oss
- mixture-of-experts
- 4bit
- quantized
- apple-silicon
- text-generation
- conversational
- agentic
- retrieval
- search
- tool-calling
- lm-studio
---

# Context-1 — MLX 4-bit

MLX quantization of [chromadb/context-1](https://huggingface.co/chromadb/context-1) for Apple Silicon.

- Converted with [mlx-lm](https://github.com/ml-explore/mlx-lm) version 0.31.2
- Also available: [context-1-MLX-6bit](https://huggingface.co/mlx-community/context-1-MLX-6bit)

## Key Specs

| Detail | Value |
|---|---|
| Architecture | Mixture-of-Experts (MoE) decoder-only Transformer |
| Base Model | gpt-oss-20b |
| Total Parameters | 20B |
| Experts | 32 routed, 4 active per token |
| Context Length | Up to 131,072 tokens |
| Attention | Alternating sliding window (128 tokens) + full attention |
| Quantization | 4-bit affine, group size 64 |
| Original Precision | BF16 |
| Disk Size | ~11 GB |
| Peak Memory | ~12 GB |
| Chat Template | Supported |

## What is Context-1?

Context-1 is a **20B parameter agentic search model** designed to retrieve supporting documents for complex, multi-hop queries. It works as a retrieval subagent alongside frontier reasoning models.

Key capabilities:

- **Query decomposition** — breaks complex multi-constraint questions into targeted subqueries
- **Parallel tool calling** — averages 2.56 tool calls per turn
- **Self-editing context** — prunes irrelevant documents mid-search (0.94 prune accuracy)
- **Cross-domain generalization** — trained on web, legal, and finance tasks

Chroma reports retrieval performance comparable to frontier LLMs at a fraction of the cost, with up to **10x faster inference**.
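To make the capabilities above concrete, here is a minimal, purely illustrative sketch of what a decompose → parallel-tool-call → prune loop can look like. Every function name, scoring rule, and data shape below is a hypothetical stand-in (Chroma's actual harness is not public, and the real decomposition and pruning are done by the model itself), not Chroma's API:

```python
# Illustrative sketch of a self-editing search loop: decompose a query,
# fan tool calls out in parallel, then prune low-relevance documents
# from the working context. All names and logic here are hypothetical.
from concurrent.futures import ThreadPoolExecutor


def decompose(query: str) -> list[str]:
    # Stand-in for model-driven query decomposition: split a
    # multi-constraint question into targeted subqueries.
    return [part.strip() for part in query.split(" and ")]


def search_tool(subquery: str) -> list[dict]:
    # Stand-in for a real search backend.
    corpus = {
        "who founded Chroma": [{"text": "Chroma was founded in 2022.", "score": 0.9}],
        "who wrote the Context-1 report": [{"text": "See the technical report.", "score": 0.8}],
    }
    return corpus.get(subquery, [{"text": "no match", "score": 0.1}])


def prune(docs: list[dict], threshold: float = 0.5) -> list[dict]:
    # Self-editing step: drop documents below a relevance threshold
    # so they stop consuming the context budget.
    return [d for d in docs if d["score"] >= threshold]


def answer(query: str) -> list[dict]:
    subqueries = decompose(query)
    with ThreadPoolExecutor() as pool:  # parallel tool calls
        results = pool.map(search_tool, subqueries)
    context = [doc for docs in results for doc in docs]
    return prune(context)


docs = answer("who founded Chroma and who wrote the Context-1 report")
print([d["text"] for d in docs])
# ['Chroma was founded in 2022.', 'See the technical report.']
```

In the real model, decomposition and pruning are learned behaviors rather than string heuristics, and the surrounding harness additionally manages token budgets and deduplication (see the agent harness note below).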
## Performance on Apple Silicon

| Metric | Value |
|---|---|
| Prompt Processing | 227 tokens/sec |
| Generation | 172 tokens/sec |
| Peak Memory | 12 GB |

## Requirements

- Apple Silicon Mac with 16 GB+ unified memory
- `mlx-lm >= 0.31.2`

```bash
pip install mlx-lm
```

## Usage

### CLI

```bash
mlx_lm.generate \
  --model mlx-community/context-1-MLX-4bit \
  --prompt "Your prompt here" \
  --max-tokens 256
```

### Python

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/context-1-MLX-4bit")
response = generate(model, tokenizer, prompt="Your prompt here", max_tokens=256)
print(response)
```

### LM Studio

This model is compatible with [LM Studio](https://lmstudio.ai) on Apple Silicon. Search for `context-1-MLX-4bit` in the model browser and download directly.

## Important: Agent Harness

Context-1 is designed to work with a **specific agent harness** that manages tool execution, token budgets, context pruning, and deduplication. The harness has not yet been publicly released by Chroma. Running the model without it will not reproduce the reported benchmark results.

See the [technical report](https://trychroma.com/research/context-1) for details on the agent harness design.

## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Credits

- Base model by [Chroma](https://trychroma.com)
- MLX quantization by [FF-01](https://huggingface.co/FF-01)

## Citation

```bibtex
@techreport{bashir2026context1,
  title       = {Chroma Context-1: Training a Self-Editing Search Agent},
  author      = {Bashir, Hammad and Hong, Kelly and Jiang, Patrick and Shi, Zhiyi},
  year        = {2026},
  month       = {March},
  institution = {Chroma},
  url         = {https://trychroma.com/research/context-1},
}
```