#38 · fix(chat_template): Emit multimodal placeholders in tool response content-parts · opened 3 days ago by harshaljanjani
#37 · GGUF available — Cerebellum v3 (11 GB, ablation-guided mixed-precision) · opened 3 days ago by deucebucket
#35 · add newlines and thinking tokens to template to avoid having to compute 3 extra tokens per generation in chat completion+reasoning · 👍 2 · opened 6 days ago by quasar-of-mikus
#34 · Very bad results with model quant and KV cache quant, only BF16 works well · 👍👀 3 · opened 7 days ago by qenme
#33 · Fix chat_template: emit empty <|channel>thought\n<channel|> wrapper for existing asst turns · opened 10 days ago by flotherxi
#32 · [Bug] chat_template: missing <|channel>thought\n<channel|> wrapper for non-thinking SFT / multi-turn · opened 10 days ago by flotherxi
#31 · Release the 124B parent weights... We know you have it. · opened 13 days ago by Dureka
#30 · tfhe_ntt::prime32::Plan::try_new · opened 15 days ago by milezdeep13
#29 · Add ParseBench evaluation results · opened 17 days ago by boyang-runllama
#27 · Thinking Mode doesn't work properly on gemma-4-26B-A4B-it. · opened 18 days ago by michaelkopf1981
#26 · Fix missing thinking channel in Gemma 4 chat template when using continue_final_message · opened 18 days ago by CalinR
#25 · Your 260k dictionary is breaking Gemma 4's back. · 8 comments · opened 20 days ago by phil111
#24 · fix: embed chat_template in tokenizer_config.json · opened 20 days ago by NERDDISCO
#20 · fix: function calling formatting in chat template · 👍 1 · 4 comments · opened 24 days ago by RyanMullins
#19 · Vertex AI & vLLM Deployment Guide for Gemma 4 26B-A4B-it (MoE) + Known Limitations · 🚀❤️ 3 · 1 comment · opened 25 days ago by Manzela-D
#18 · Thank you Google! · 2 comments · opened 26 days ago by KngRnZ
#17 · [Appreciation] Incredible performance of Gemma 4-26b on consumer hardware — 90 t/s even on an older DDR3 system! · 2 comments · opened 26 days ago by MightyLoraLord
#16 · Excellent release, Google. Gemma 4 is good. · ❤️ 3 · 2 comments · opened 27 days ago by DorkMckork1
#15 · Fantastic release! · 👍 7 · 4 comments · opened 28 days ago by Dampfinchen
#14 · Significant Otter! ❤ · 🔥 3 · 2 comments · opened 29 days ago by MrDevolver
#13 · THANK YOU! Google · ❤️👍 7 · 1 comment · opened 30 days ago by E7Reine
#11 · Are you guys going to add other MoE stuff? · 🤗 3 · 2 comments · opened about 1 month ago by Nesy1
#9 · Verified Commit? · 2 comments · opened about 1 month ago by stephenrawls
#8 · please fp8 · 2 comments · opened about 1 month ago by huang123chuan
#7 · First community NVFP4 quantization of Gemma 4 26B-A4B-it (49GB → 16.5GB) · 👍 2 · 2 comments · opened about 1 month ago by marioiseli
#5 · Add AIME 2026 evaluation result · opened about 1 month ago by SaylorTwift
#3 · add eval results · opened about 1 month ago by merve
#1 · error when batch size >1 · opened about 2 months ago by loulou2