# Gemma4-52B-A6B Canopy
`gemma4-52b-a6b-canopy` is an alternate production-evaluation build in the
Gemma4-52B-A6B line. It shares the expanded Gemma4 MoE backbone of the
primary release but ships with a different evaluation profile for SWE and
agentic workloads.
This is a full model checkpoint, not a LoRA adapter.
## Model Details
- Base model: `unsloth/Gemma-4-26B-A4B-it`
- Architecture: Gemma4 MoE
- Total logical parameters: approximately 50.6B
- Active parameters: approximately 6B
- Expert layout: 128 base experts + 128 added specialist experts
- Native context length: 256k tokens
- Validated local serve window: 131k tokens
- Primary focus: SWE, code/repository analysis, agentic workflows, and reasoning
- Recommended temperature: 0.5 to 0.7
## Serving
Use the included Gemma4 chat template. Thinking mode is intended to be enabled.
```bash
CUDA_VISIBLE_DEVICES=0 \
vllm serve . \
  --served-model-name gemma4-52b-a6b-canopy \
  --host 0.0.0.0 \
  --port 23333 \
  --dtype bfloat16 \
  --tensor-parallel-size 1 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.88 \
  --trust-remote-code \
  --reasoning-parser gemma4 \
  --tool-call-parser gemma4 \
  --enable-auto-tool-choice \
  --chat-template ./chat_template.jinja \
  --default-chat-template-kwargs '{"enable_thinking": true}'
```
The server exposes an OpenAI-compatible endpoint at `http://localhost:23333/v1`.
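
As a quick smoke test, a minimal chat completion request against that endpoint could look like the sketch below. The prompt is illustrative, the temperature of 0.6 simply sits inside the recommended 0.5 to 0.7 range, and thinking mode is already on by default via the serve flags above.

```bash
# Minimal smoke test against the endpoint started above.
# Model name and port match the serve command; the prompt is illustrative.
curl http://localhost:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma4-52b-a6b-canopy",
    "temperature": 0.6,
    "messages": [
      {"role": "user", "content": "Explain what the router in a MoE layer does."}
    ]
  }'
```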
## Notes
Canopy is best treated as an evaluation sibling of the primary
`gemma4-52b-a6b` release rather than a replacement.
Feedback is welcome. If you notice specific strengths, failures, or behavioral differences versus the primary build, please share them through the model discussion page or the usual Hugging Face repository feedback channels.