---
title: Gemma-4 Multichat
emoji: 👀
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 6.10.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: Gemma 4 – MoE 26B or Dense 31B, Vision, Thinking
hf_oauth: true
hf_oauth_scopes:
  - email
---

# 💎 Gemma 4 Playground – Dual Model Demo on ZeroGPU

We just launched a Gemma 4 Playground that lets you chat with Google DeepMind's latest open models, directly on Hugging Face Spaces with ZeroGPU.

👉 Try it now: FINAL-Bench/Gemma-4-Multi

## Two Models, One Space

Switch between both Gemma 4 variants in a single interface:

- ⚡ **Gemma 4 26B-A4B** – MoE with 128 experts and only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
- 🏆 **Gemma 4 31B** – Dense 30.7B. Best quality in the Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Top 3 among open models on Arena.

## Features

- **Vision** – Upload images for analysis, OCR, chart reading, and document parsing
- **Thinking Mode** – Toggle chain-of-thought reasoning with Gemma 4's native `<|channel>` thinking tokens
- **System Prompts** – 6 presets (General, Code, Math, Creative, Translate, Research), or write your own
- **Streaming** – Real-time token-by-token responses via ZeroGPU
- **Apache 2.0** – Fully open, no restrictions
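To make the Thinking Mode toggle concrete, here is a minimal sketch of how a Space like this might separate the model's reasoning stream from its final answer before display. The delimiter strings below are placeholders: the README only mentions `<|channel>`-style thinking tokens, so the exact token names are an assumption.

```python
# Placeholder delimiters -- NOT the real Gemma 4 token names, which are
# not spelled out in this README. Swap in the model's actual markers.
THINK_OPEN = "<|channel|>analysis"
THINK_CLOSE = "<|channel|>final"

def split_thinking(text: str) -> tuple[str, str]:
    """Split a raw completion into (reasoning, answer).

    If no thinking markers are present, the whole text is the answer.
    """
    if THINK_OPEN not in text:
        return "", text.strip()
    _, _, rest = text.partition(THINK_OPEN)
    reasoning, _, answer = rest.partition(THINK_CLOSE)
    return reasoning.strip(), answer.strip()
```

With a split like this, the UI can render the reasoning in a collapsible panel when the toggle is on, and show only the answer when it is off.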

## Technical Details

Built with the dev build of `transformers` (5.5.0.dev0) for full Gemma 4 support, including multimodal `apply_chat_template`, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with `@spaces.GPU`, so no dedicated GPU is needed. Both models support a 256K context window and 140+ languages out of the box.
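As a sketch of the multimodal `apply_chat_template` path mentioned above, the Space might assemble each turn as a message list with typed content parts. The field names below follow the common `transformers` content-parts convention (`{"type": "image"}` / `{"type": "text", ...}`); the exact schema Gemma 4 expects, and the `build_messages` helper itself, are assumptions for illustration.

```python
def build_messages(system_prompt: str, user_text: str,
                   has_image: bool = False) -> list[dict]:
    """Build a chat-template message list, optionally with an image part.

    Image pixels are passed to the processor separately; the message list
    only carries a placeholder part marking where the image goes.
    """
    content = []
    if has_image:
        content.append({"type": "image"})
    content.append({"type": "text", "text": user_text})
    return [
        {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
        {"role": "user", "content": content},
    ]
```

In the app, a list like this would be fed to the processor's `apply_chat_template(..., add_generation_prompt=True)` inside a `@spaces.GPU`-decorated function, so ZeroGPU allocates a GPU only for the duration of the generation call.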

## Links

Built by VIDRAFT 🧬