Gemma4-52B-A6B Canopy

gemma4-52b-a6b-canopy is an alternate production-evaluation build in the Gemma4-52B-A6B line. It shares the expanded Gemma4 MoE backbone of the primary release but is evaluated under a different profile aimed at SWE and agentic workloads.

This is a full model checkpoint, not a LoRA adapter.

Model Details

  • Base model: unsloth/Gemma-4-26B-A4B-it
  • Architecture: Gemma4 MoE
  • Total logical parameters: approximately 50.6B
  • Active parameters: approximately 6B
  • Expert layout: 128 base experts + 128 added specialist experts
  • Native context length: 256k tokens
  • Validated local serve window: 131,072 tokens (matches --max-model-len in the serve command below)
  • Primary focus: SWE, code/repository analysis, agentic workflows, and reasoning
  • Recommended temperature: 0.5 to 0.7
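
As a sanity check, a downloaded checkpoint can be compared against the numbers above by inspecting its config directly. A minimal sketch with jq; the field names below (max_position_embeddings, num_local_experts, torch_dtype) are assumptions based on common Hugging Face config conventions, not confirmed names from the Gemma4 MoE config:

# Print context window, expert count, and dtype from the checkpoint config.
# Field names are assumptions; adjust to whatever config.json actually contains.
jq '{context: .max_position_embeddings, experts: .num_local_experts, dtype: .torch_dtype}' config.json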

Serving

Use the included Gemma4 chat template. Thinking mode is intended to be on; the command below enables it through the chat-template kwargs.

CUDA_VISIBLE_DEVICES=0 \
vllm serve . \
  --served-model-name gemma4-52b-a6b-canopy \
  --host 0.0.0.0 \
  --port 23333 \
  --dtype bfloat16 \
  --tensor-parallel-size 1 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.88 \
  --trust-remote-code \
  --reasoning-parser gemma4 \
  --tool-call-parser gemma4 \
  --enable-auto-tool-choice \
  --chat-template ./chat_template.jinja \
  --default-chat-template-kwargs '{"enable_thinking": true}'

OpenAI-compatible endpoint:

http://localhost:23333/v1
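
Once the server is up, a quick smoke test against the endpoint can confirm it is serving. A minimal sketch: the prompt is purely illustrative, and temperature 0.6 sits inside the recommended 0.5 to 0.7 range.

# Confirm the server is up and gemma4-52b-a6b-canopy is registered.
curl http://localhost:23333/v1/models

# Minimal chat completion against the OpenAI-compatible endpoint.
curl http://localhost:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma4-52b-a6b-canopy",
    "temperature": 0.6,
    "messages": [{"role": "user", "content": "Explain what a MoE router does in two sentences."}]
  }'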

Notes

Canopy is best treated as an evaluation sibling of the primary gemma4-52b-a6b release rather than a replacement.

Feedback is welcome. If you notice specific strengths, failures, or behavioral differences versus the primary build, please share them through the model discussion page or the usual Hugging Face repository feedback channels.
