Devstral Small 2 24B Instruct (MLX 3-bit)
MLX-optimized 3-bit quantization of mistralai/Devstral-Small-2-24B-Instruct-2512 for Apple Silicon Macs.
Model Details
- Base Model: Devstral Small 2 24B Instruct (December 2025 release, version 2512)
- Architecture: Mistral3 (40 layers, GQA)
- Quantization: 3-bit affine, group size 64 (~3.5 bits/weight effective)
- Size: ~9.6 GB (vs ~48 GB BF16, ~12 GB 4-bit)
- Framework: MLX via mlx-lm
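The ~3.5 bits/weight figure above follows from the quantization layout: with affine quantization, each group of 64 weights stores an fp16 scale and an fp16 bias alongside the packed 3-bit values. A quick back-of-the-envelope check (the ~23.6B parameter count used below is an approximation, not an exact figure from the model):

```python
# Affine 3-bit quantization, group size 64: each group of 64 weights
# carries one fp16 scale and one fp16 bias in addition to the packed bits.
bits, group_size = 3, 64
overhead_bits = 16 + 16  # scale + bias per group, fp16 each
effective_bits = bits + overhead_bits / group_size
print(effective_bits)  # 3.5 bits/weight

# Rough on-disk size for ~23.6B parameters (approximate count)
params = 23.6e9
size_gib = params * effective_bits / 8 / 2**30
print(round(size_gib, 1))  # ~9.6 GiB
```

This matches the ~9.6 GB size listed above; small deviations are expected since some tensors (e.g. norms) are not quantized.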
Key Features
- Native Tool Calling: Full [AVAILABLE_TOOLS]/[TOOL_CALLS]/[TOOL_RESULTS] support
- Agentic Coding: Designed for code generation, editing, and multi-step tool use
- 256K Context: Supports up to 256K token context window
- Speculative Decoding Compatible: Works with Mistral Tekken tokenizer draft models
Quantization Method
This model was created using a dequantize-requantize workflow for optimal 3-bit quality:
```bash
# Step 1: Dequantize the MLX 4-bit model back to BF16
mlx_lm.convert \
  --hf-path mlx-community/mistralai_Devstral-Small-2-24B-Instruct-2512-MLX-4Bit \
  --mlx-path ./devstral-v2-bf16 \
  --dequantize --dtype bfloat16

# Step 2: Requantize from BF16 to 3-bit
mlx_lm.convert \
  --hf-path ./devstral-v2-bf16 \
  --mlx-path ./devstral-v2-3bit \
  --quantize --q-bits 3 --q-group-size 64
```
Why this works: Direct 3-bit quantization from the original HuggingFace BF16 weights produces degenerate output ("decay decay decay") due to weight distribution differences during the PyTorch-to-MLX conversion. Going through the MLX 4-bit model's dequantized BF16 preserves the weight structure that MLX inference requires.
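As a sanity check after step 2, the quantization parameters recorded in the converted model's config.json can be verified before loading. A minimal sketch, assuming mlx-lm's convention of storing these settings under a `quantization` key (the helper function name is ours):

```python
import json

def check_quant_config(cfg: dict, bits: int = 3, group_size: int = 64) -> bool:
    """Return True if a converted model's config records the expected
    quantization settings (mlx-lm stores them under the 'quantization' key)."""
    q = cfg.get("quantization", {})
    return q.get("bits") == bits and q.get("group_size") == group_size

# Usage against the converted model directory:
# with open("./devstral-v2-3bit/config.json") as f:
#     assert check_quant_config(json.load(f))
```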
Usage
```python
from mlx_lm import load, generate

model, tokenizer = load("badmadrad/Devstral-Small-2-24B-Instruct-2512-MLX-3bit")
messages = [{"role": "user", "content": "Write a Python function to sort a list."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```
With Tool Calling
```python
tools = [{"type": "function", "function": {"name": "read_file", "description": "Read a file", "parameters": {"type": "object", "properties": {"path": {"type": "string"}}, "required": ["path"]}}}]
messages = [{"role": "user", "content": "Read the file main.py"}]
prompt = tokenizer.apply_chat_template(messages, tools=tools, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
# Output: [TOOL_CALLS]read_file[ARGS]{"path": "main.py"}
```
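The raw completion uses Devstral's token-level tool-call format. A minimal sketch of extracting the call on the client side, assuming the `[TOOL_CALLS]name[ARGS]{...}` layout shown in the output comment above (the parser is illustrative, not part of mlx-lm):

```python
import json
import re

def parse_tool_call(text: str):
    """Extract (name, args) from a '[TOOL_CALLS]name[ARGS]{...}' completion,
    or return None if the model answered in plain text instead."""
    m = re.search(r"\[TOOL_CALLS\](\w+)\[ARGS\](\{.*\})", text)
    if m is None:
        return None
    return m.group(1), json.loads(m.group(2))

print(parse_tool_call('[TOOL_CALLS]read_file[ARGS]{"path": "main.py"}'))
# → ('read_file', {'path': 'main.py'})
```

In an agent loop, the extracted name and arguments would be dispatched to the matching tool, with the result fed back via [TOOL_RESULTS].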
With Speculative Decoding
```python
from mlx_lm import load, generate

model, tokenizer = load("badmadrad/Devstral-Small-2-24B-Instruct-2512-MLX-3bit")
draft_model, _ = load("badmadrad/Mistral-Small-3.1-DRAFT-0.5B-MLX-4bit")

# The draft model proposes tokens and the main model verifies them,
# speeding up generation. Recent mlx-lm releases accept the draft
# model directly as an argument to generate():
response = generate(model, tokenizer, prompt="Write a haiku about code.",
                    draft_model=draft_model, max_tokens=100)
```
Hardware Requirements
- Apple Silicon Mac (M1/M2/M3/M4)
- Minimum 16 GB unified memory
- macOS 13.5+
Comparison
| Variant | Size | Quality | Tool Calling |
|---|---|---|---|
| BF16 (original) | ~48 GB | Best | Yes |
| 4-bit (mlx-community) | ~12 GB | Great | Yes |
| 3-bit (this model) | ~9.6 GB | Good | Yes |
License
Apache 2.0 (same as base model)