Gemma4-52B-A6B

gemma4-52b-a6b is a production MoE expansion of unsloth/Gemma-4-26B-A4B-it. It was trained on a local two-GPU setup and expanded with additional specialist experts covering software engineering, repository reasoning, agentic workflows, and general instruction following.

This is a full model checkpoint, not a LoRA adapter.

Model Details

  • Base model: unsloth/Gemma-4-26B-A4B-it
  • Architecture: Gemma4 MoE
  • Total logical parameters: approximately 50.6B
  • Active parameters: approximately 6B
  • Expert layout: 128 base experts + 128 added specialist experts
  • Native context length: 256k tokens
  • Validated local serve window: 131k tokens
  • Primary focus: SWE, code/repository analysis, agentic workflows, and reasoning
  • Recommended temperature: 0.5 to 0.7
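
The parameter figures above imply a sparse activation ratio per token. A quick sketch of that arithmetic, using only the ~50.6B total and ~6B active figures from the list (the routing details, such as top-k expert selection, are not specified here):

```python
# Rough MoE activation arithmetic from the model details above.
total_params = 50.6e9   # total logical parameters (~50.6B)
active_params = 6.0e9   # parameters active per token (~6B)

active_fraction = active_params / total_params
print(f"~{active_fraction:.1%} of weights active per token")  # ~11.9%
```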

Serving

Use the included Gemma4 chat template. Thinking mode is intended to be enabled.

CUDA_VISIBLE_DEVICES=0 \
vllm serve . \
  --served-model-name gemma4-52b-a6b \
  --host 0.0.0.0 \
  --port 23333 \
  --dtype bfloat16 \
  --tensor-parallel-size 1 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.88 \
  --trust-remote-code \
  --reasoning-parser gemma4 \
  --tool-call-parser gemma4 \
  --enable-auto-tool-choice \
  --chat-template ./chat_template.jinja \
  --default-chat-template-kwargs '{"enable_thinking": true}'

OpenAI-compatible endpoint:

http://localhost:23333/v1
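
The server speaks the standard OpenAI chat-completions protocol. A minimal sketch of a request body, assuming the model name and port from the serve command above; the per-request `chat_template_kwargs` field mirrors the server-side `--default-chat-template-kwargs` setting and is an assumption about this serving setup:

```python
import json

# Body for POST http://localhost:23333/v1/chat/completions
# (temperature kept inside the recommended 0.5-0.7 range).
payload = {
    "model": "gemma4-52b-a6b",
    "messages": [
        {"role": "user", "content": "Summarize what this repository's build script does."},
    ],
    "temperature": 0.6,
    "max_tokens": 1024,
    # Hypothetical per-request override matching the server's default
    # --default-chat-template-kwargs '{"enable_thinking": true}'.
    "chat_template_kwargs": {"enable_thinking": True},
}

print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client can send this body by pointing its base URL at http://localhost:23333/v1.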

Notes

This release is intended for production evaluation of the expanded Gemma4 MoE line. Feedback is welcome through the model discussion page or the usual Hugging Face repository feedback channels.
