SeaWolf-AI committed on
Commit 0ec7fa3 · verified · 1 Parent(s): 12c11cc

Update README.md

Files changed (1): README.md (+19 -24)
README.md CHANGED
@@ -13,38 +13,33 @@ hf_oauth: true
 hf_oauth_scopes:
 - email
 ---
-## 💎 Gemma 4 Playground — Dual Model Demo on ZeroGPU
-
-We just launched a **Gemma 4 Playground** that lets you chat with Google DeepMind's latest open models — directly on Hugging Face Spaces with ZeroGPU.
-
-**👉 Try it now:** [FINAL-Bench/Gemma-4-Multi](https://huggingface.co/spaces/FINAL-Bench/Gemma-4-Multi)
-
-### Two Models, One Space
+💎 Gemma 4 Playground — Dual Model Demo on ZeroGPU
+We just launched a Gemma 4 Playground that lets you chat with Google DeepMind's latest open models — directly on Hugging Face Spaces with ZeroGPU.
+👉 Try it now: FINAL-Bench/Gemma-4-Multi
+Two Models, One Space
 Switch between both Gemma 4 variants in a single interface:
 
-- **⚡ Gemma 4 26B-A4B** — MoE with 128 experts, only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
-- **🏆 Gemma 4 31B** — Dense 30.7B. Best quality among Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Arena open-model top 3.
-
-### Features
-
-- **Vision** — Upload images for analysis, OCR, chart reading, document parsing
-- **Thinking Mode** — Toggle chain-of-thought reasoning with Gemma 4's native `<|channel>` thinking tokens
-- **System Prompts** — 6 presets (General, Code, Math, Creative, Translate, Research) or write your own
-- **Streaming** — Real-time token-by-token response via ZeroGPU
-- **Apache 2.0** — Fully open, no restrictions
+⚡ Gemma 4 26B-A4B — MoE with 128 experts, only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
+🏆 Gemma 4 31B — Dense 30.7B. Best quality among Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Arena open-model top 3.
+
+Features
+
+Vision — Upload images for analysis, OCR, chart reading, document parsing
+Thinking Mode — Toggle chain-of-thought reasoning with Gemma 4's native <|channel> thinking tokens
+System Prompts — 6 presets (General, Code, Math, Creative, Translate, Research) or write your own
+Streaming — Real-time token-by-token response via ZeroGPU
+Apache 2.0 — Fully open, no restrictions
 
-### Technical Details
-
-Built with the dev build of `transformers` (5.5.0.dev0) for full Gemma 4 support including multimodal `apply_chat_template`, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with `@spaces.GPU` — no dedicated GPU needed.
+Technical Details
+Built with the dev build of transformers (5.5.0.dev0) for full Gemma 4 support including multimodal apply_chat_template, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with @spaces.GPU — no dedicated GPU needed.
 Both models support 256K context window and 140+ languages out of the box.
 
 ### Links
 
-- 🤗 **Space**: [FINAL-Bench/Gemma-4-Multi](https://huggingface.co/spaces/FINAL-Bench/Gemma-4-Multi)
-- 📄 **Gemma 4 26B-A4B**: [google/gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
-- 📄 **Gemma 4 31B**: [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it)
-- 🔬 **DeepMind Blog**: [Gemma 4 Launch](https://deepmind.google/blog/gemma-4-byte-for-byte-the-most-capable-open-models/)
+- 🤗 Space: [FINAL-Bench/Gemma-4-Multi](https://huggingface.co/spaces/FINAL-Bench/Gemma-4-Multi)
+- 📄 Gemma 4 26B-A4B: [google/gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
+- 📄 Gemma 4 31B: [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it)
+- 🔬 DeepMind Blog: [Gemma 4 Launch](https://deepmind.google/blog/gemma-4-byte-for-byte-the-most-capable-open-models/)
 
-Built by **VIDRAFT** 🧬
+Built by VIDRAFT 🧬
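
The multimodal `apply_chat_template` flow that the README's Technical Details section mentions consumes a structured list of messages, with system-prompt presets and uploaded images slotted into the user turn. A minimal sketch of that structure follows; the preset texts and the `build_messages` helper are illustrative assumptions, not the Space's actual source code.

```python
# Illustrative preset strings for the six system-prompt modes the README
# lists; the real Space's prompts are not published here.
SYSTEM_PRESETS = {
    "General": "You are a helpful, accurate assistant.",
    "Code": "You are an expert programmer. Prefer concise, correct code.",
    "Math": "Reason step by step and give exact final answers.",
    "Creative": "Write vividly and originally.",
    "Translate": "Translate faithfully, preserving tone and register.",
    "Research": "Answer carefully, noting sources and caveats.",
}

def build_messages(user_text, preset="General", image_url=None):
    """Assemble a chat-template message list; an image, if any, rides
    alongside the text inside the same user turn."""
    messages = [
        {"role": "system",
         "content": [{"type": "text", "text": SYSTEM_PRESETS[preset]}]},
    ]
    user_content = []
    if image_url is not None:
        user_content.append({"type": "image", "url": image_url})
    user_content.append({"type": "text", "text": user_text})
    messages.append({"role": "user", "content": user_content})
    return messages
```

A Space handler would pass a list like this to the processor's `apply_chat_template` and stream the generated tokens back to the UI; everything above the model call is plain data shaping, which is why it can live outside the GPU-decorated function.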