Status on Gemma 4?
Hi dear Gemma-Team,
Many people in the open source community are wondering whether you have any plans to make Gemma 4, as it has been quite a while since the release of Gemma 3, which was greatly appreciated. Of course, we will wait patiently, but a short status update on whether Gemma 4 is in training or planned at all would be very kind. People are really excited about a potential new release.
Thank you!
Hi @Dampfinchen ,
Thank you for your message and for the ongoing excitement around Gemma 4. We're grateful for the strong interest and support from the community. For the most current updates, please keep an eye on the Gemma release notes and the Google DeepMind blog. Any official news regarding a Gemma 4 release will be announced there first.
Thanks
Please make Gemma 4's knowledge cutoff August 2025
Any status updates? 👀
Hi All,
Thank you again for your patience and continued excitement around Gemma, we truly appreciate the support from the open-source community. We're happy to share that Gemma 4 has now been officially released! 🎉
This new generation comes in four versatile sizes: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture of Experts (MoE), and 31B Dense. The entire family goes beyond simple chat, with strong capabilities in complex reasoning, logic, and agentic workflows.
We’re excited to see what you build with Gemma 4, and as always, we’d love your feedback!
Thanks again for being part of the journey.
Already building with it via Unsloth. Great job, folks! 🫡
Gemma 4 hasn't been officially announced with a public release date as far as I can tell from what's publicly available, but based on the cadence of Gemma 2 → Gemma 3, it's reasonable to expect something in the next few months. The jump from Gemma 2 to Gemma 3 brought meaningful architectural changes (interleaved attention, multimodality in the larger variants), so Gemma 4 will likely iterate further on efficiency at scale. The 27B parameter point is interesting because it sits at a sweet spot for local/on-premise deployment — I'd watch whether Google pushes the next generation toward better long-context handling or more aggressive quantization support, both of which matter a lot for production workloads.
One thing worth noting if you're building agentic pipelines on top of google/gemma-3-27b-it right now: the "it" (instruction-tuned) variants have gotten significantly more reliable for tool-use and structured output compared to Gemma 2, but there are still consistency issues when you're chaining multiple agents together. This is actually a broader problem in the ecosystem right now — the HN thread on MCP security is a good example of how agent frameworks are bolting on capability without thinking carefully about trust boundaries between agents. We've been working on this at AgentGraph, specifically around giving each agent a verifiable identity so that when gemma-3-27b-it is operating as a sub-agent, the orchestrator can actually verify what it's receiving came from the expected model instance with the expected config, not something that got substituted or tampered with mid-pipeline.
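The "verify it came from the expected model instance with the expected config" idea above can be sketched with a plain shared-secret HMAC over the sub-agent's output. This is only an illustrative sketch, not AgentGraph's actual API: the function names, key provisioning, and the `model`/`config` fields are all assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: the orchestrator provisions each sub-agent with a
# signing key, and every message is bound to the model identity and config
# that produced it, so substituted or tampered output fails verification.

def sign_message(key: bytes, model_id: str, config_hash: str, payload: str) -> str:
    """Sub-agent side: sign the payload together with model id and config."""
    msg = json.dumps(
        {"model": model_id, "config": config_hash, "payload": payload},
        sort_keys=True,
    ).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_message(key: bytes, model_id: str, config_hash: str,
                   payload: str, signature: str) -> bool:
    """Orchestrator side: recompute and compare in constant time."""
    expected = sign_message(key, model_id, config_hash, payload)
    return hmac.compare_digest(expected, signature)

# Example: a signature made for gemma-3-27b-it does not verify if the
# orchestrator expected output from a different model.
key = b"per-agent-secret"  # provisioned out of band, one per sub-agent
sig = sign_message(key, "google/gemma-3-27b-it", "cfg-v1", '{"answer": 42}')
assert verify_message(key, "google/gemma-3-27b-it", "cfg-v1", '{"answer": 42}', sig)
assert not verify_message(key, "some-other-model", "cfg-v1", '{"answer": 42}', sig)
```

A shared-secret MAC only proves the message came from a holder of the key; a production design would more likely use per-agent asymmetric keys so the orchestrator never holds a secret that could forge sub-agent output.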
On the Gemma 4 question specifically: I'd also watch the technical report when it drops for any changes to the safety tuning methodology. Gemma 3's approach was fairly well documented, and if you're deploying in regulated environments, understanding how the RLHF/RLAIF pipeline changed between versions matters for compliance. The model card for this repo is reasonably detailed but the safety eval methodology section is still thinner than I'd like for production decision-making.
Guys, that comment is AI.
I know, Gemma 4 has been released.
