gaunernst committed
Commit 36e3234 · 1 Parent(s): feb0822
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,521 @@
1
+ ---
2
+ license: gemma
3
+ library_name: transformers
4
+ pipeline_tag: text-generation
5
+ extra_gated_heading: Access Gemma on Hugging Face
6
+ extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
7
+ agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
8
+ Face and click below. Requests are processed immediately.
9
+ extra_gated_button_content: Acknowledge license
10
+ base_model: google/gemma-3-1b-it
11
+ ---
12
+
13
+ # Gemma 3 1B Instruction-tuned INT4
14
+
15
+ This is the QAT INT4 Flax checkpoint (from Kaggle) converted to the HF AWQ format for ease of use. Note that AWQ itself was NOT used for quantization: the INT4 weights come from Google's quantization-aware training and are only repacked into the AWQ layout. The conversion script `convert_flax.py` is included in this model repo.
16
+
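+ To run the converted checkpoint directly, the snippet below should work (a minimal sketch, assuming a CUDA GPU and a recent `transformers` with the `autoawq` package installed; the model id is a placeholder for this repo's id or a local path):
+ 
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ model_id = "gaunernst/gemma-3-1b-it-int4-awq"  # placeholder: substitute this repo's id or a local path
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda").eval()
+ 
+ messages = [{"role": "user", "content": "Explain quantization-aware training in one sentence."}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ 
+ with torch.inference_mode():
+     output = model.generate(input_ids, max_new_tokens=64)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+ 
+ Only the linear layers are stored as INT4 AWQ weights; embeddings and norms stay in bfloat16, as produced by `convert_flax.py`.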
17
+ Below is the original Model card from https://huggingface.co/google/gemma-3-1b-it
18
+
19
+ # Gemma 3 model card
20
+
21
+ **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
22
+
23
+ **Resources and Technical Documentation**:
24
+
25
+ * [Gemma 3 Technical Report][g3-tech-report]
26
+ * [Responsible Generative AI Toolkit][rai-toolkit]
27
+ * [Gemma on Kaggle][kaggle-gemma]
28
+ * [Gemma on Vertex Model Garden][vertex-mg-gemma3]
29
+
30
+ **Terms of Use**: [Terms][terms]
31
+
32
+ **Authors**: Google DeepMind
33
+
34
+ ## Model Information
35
+
36
+ Summary description and brief definition of inputs and outputs.
37
+
38
+ ### Description
39
+
40
+ Gemma is a family of lightweight, state-of-the-art open models from Google,
41
+ built from the same research and technology used to create the Gemini models.
42
+ Gemma 3 models are multimodal, handling text and image input and generating text
43
+ output, with open weights for both pre-trained variants and instruction-tuned
44
+ variants. Gemma 3 has a large, 128K context window, multilingual support in over
45
+ 140 languages, and is available in more sizes than previous versions. Gemma 3
46
+ models are well-suited for a variety of text generation and image understanding
47
+ tasks, including question answering, summarization, and reasoning. Their
48
+ relatively small size makes it possible to deploy them in environments with
49
+ limited resources such as laptops, desktops or your own cloud infrastructure,
50
+ democratizing access to state of the art AI models and helping foster innovation
51
+ for everyone.
52
+
53
+ ### Inputs and outputs
54
+
55
+ - **Input:**
56
+ - Text string, such as a question, a prompt, or a document to be summarized
57
+ - Images, normalized to 896 x 896 resolution and encoded to 256 tokens
58
+ each
59
+ - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
60
+ 32K tokens for the 1B size
61
+
62
+ - **Output:**
63
+ - Generated text in response to the input, such as an answer to a
64
+ question, analysis of image content, or a summary of a document
65
+ - Total output context of 8192 tokens
66
+
67
+ ### Usage
68
+
69
+ Below are some code snippets to help you quickly get started running the model. First, install the Transformers version built for Gemma 3:
70
+
71
+ ```sh
72
+
73
+ $ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
74
+
75
+ ```
76
+
77
+ Then, copy the snippet from the section that is relevant for your use case.
78
+
79
+ #### Running with the `pipeline` API
80
+
81
+ With instruction-tuned models, you first need to format your inputs with the chat template, and then pass them to the pipeline.
82
+
83
+ ```python
+ import torch
+ from transformers import pipeline
+ 
+ pipe = pipeline("text-generation", model="google/gemma-3-1b-it", device="cuda", torch_dtype=torch.bfloat16)
+ 
+ messages = [
+     [
+         {
+             "role": "system",
+             "content": [{"type": "text", "text": "You are a helpful assistant."},]
+         },
+         {
+             "role": "user",
+             "content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"},]
+         },
+     ],
+ ]
+ 
+ output = pipe(messages, max_new_tokens=50)
+ ```
103
+
104
+ #### Running the model on a single / multi GPU
105
+
106
+ ```python
+ from transformers import AutoTokenizer, BitsAndBytesConfig, Gemma3ForCausalLM
+ import torch
+ 
+ model_id = "google/gemma-3-1b-it"
+ 
+ quantization_config = BitsAndBytesConfig(load_in_8bit=True)
+ 
+ model = Gemma3ForCausalLM.from_pretrained(
+     model_id, quantization_config=quantization_config
+ ).eval()
+ 
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ 
+ messages = [
+     [
+         {
+             "role": "system",
+             "content": [{"type": "text", "text": "You are a helpful assistant."},]
+         },
+         {
+             "role": "user",
+             "content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"},]
+         },
+     ],
+ ]
+ inputs = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     tokenize=True,
+     return_dict=True,
+     return_tensors="pt",
+ ).to(model.device).to(torch.bfloat16)
+ 
+ with torch.inference_mode():
+     outputs = model.generate(**inputs, max_new_tokens=64)
+ 
+ outputs = tokenizer.batch_decode(outputs)
+ ```
146
+
147
+
148
+ ### Citation
149
+
150
+ ```none
151
+ @article{gemma_2025,
152
+ title={Gemma 3},
153
+ url={https://goo.gle/Gemma3Report},
154
+ publisher={Kaggle},
155
+ author={Gemma Team},
156
+ year={2025}
157
+ }
158
+ ```
159
+
160
+ ## Model Data
161
+
162
+ Data used for model training and how the data was processed.
163
+
164
+ ### Training Dataset
165
+
166
+ These models were trained on a dataset of text data that includes a wide variety
167
+ of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
168
+ trained with 12 trillion tokens, the 4B model with 4 trillion tokens, and the
169
+ 1B model with 2 trillion tokens. Here are the key components:
170
+
171
+ - Web Documents: A diverse collection of web text ensures the model is
172
+ exposed to a broad range of linguistic styles, topics, and vocabulary. The
173
+ training dataset includes content in over 140 languages.
174
+ - Code: Exposing the model to code helps it to learn the syntax and
175
+ patterns of programming languages, which improves its ability to generate
176
+ code and understand code-related questions.
177
+ - Mathematics: Training on mathematical text helps the model learn logical
178
+ reasoning, symbolic representation, and to address mathematical queries.
179
+ - Images: A wide range of images enables the model to perform image
180
+ analysis and visual data extraction tasks.
181
+
182
+ The combination of these diverse data sources is crucial for training a powerful
183
+ multimodal model that can handle a wide variety of different tasks and data
184
+ formats.
185
+
186
+ ### Data Preprocessing
187
+
188
+ Here are the key data cleaning and filtering methods applied to the training
189
+ data:
190
+
191
+ - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
192
+ was applied at multiple stages in the data preparation process to ensure
193
+ the exclusion of harmful and illegal content.
194
+ - Sensitive Data Filtering: As part of making Gemma pre-trained models
195
+ safe and reliable, automated techniques were used to filter out certain
196
+ personal information and other sensitive data from training sets.
197
+ - Additional methods: Filtering based on content quality and safety in
198
+ line with [our policies][safety-policies].
199
+
200
+ ## Implementation Information
201
+
202
+ Details about the model internals.
203
+
204
+ ### Hardware
205
+
206
+ Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
207
+ TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
208
+ computational power. TPUs, designed specifically for matrix operations common in
209
+ machine learning, offer several advantages in this domain:
210
+
211
+ - Performance: TPUs are specifically designed to handle the massive
212
+ computations involved in training VLMs. They can speed up training
213
+ considerably compared to CPUs.
214
+ - Memory: TPUs often come with large amounts of high-bandwidth memory,
215
+ allowing for the handling of large models and batch sizes during training.
216
+ This can lead to better model quality.
217
+ - Scalability: TPU Pods (large clusters of TPUs) provide a scalable
218
+ solution for handling the growing complexity of large foundation models.
219
+ You can distribute training across multiple TPU devices for faster and more
220
+ efficient processing.
221
+ - Cost-effectiveness: In many scenarios, TPUs can provide a more
222
+ cost-effective solution for training large models compared to CPU-based
223
+ infrastructure, especially when considering the time and resources saved
224
+ due to faster training.
225
+ - These advantages are aligned with
226
+ [Google's commitments to operate sustainably][sustainability].
227
+
228
+ ### Software
229
+
230
+ Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
231
+
232
+ JAX allows researchers to take advantage of the latest generation of hardware,
233
+ including TPUs, for faster and more efficient training of large models. ML
234
+ Pathways is Google's latest effort to build artificially intelligent systems
235
+ capable of generalizing across multiple tasks. This is especially suitable for
236
+ foundation models, including large language models like these.
237
+
238
+ Together, JAX and ML Pathways are used as described in the
239
+ [paper about the Gemini family of models][gemini-2-paper]; *"the 'single
240
+ controller' programming model of Jax and Pathways allows a single Python
241
+ process to orchestrate the entire training run, dramatically simplifying the
242
+ development workflow."*
243
+
244
+ ## Evaluation
245
+
246
+ Model evaluation metrics and results.
247
+
248
+ ### Benchmark Results
249
+
250
+ These models were evaluated against a large collection of different datasets and
251
+ metrics to cover different aspects of text generation:
252
+
253
+ #### Reasoning and factuality
254
+
255
+ | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
256
+ | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
257
+ | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
258
+ | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
259
+ | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
260
+ | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
261
+ | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
262
+ | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
263
+ | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
264
+ | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
265
+ | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
266
+ | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
267
+ | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
268
+
269
+ [hellaswag]: https://arxiv.org/abs/1905.07830
270
+ [boolq]: https://arxiv.org/abs/1905.10044
271
+ [piqa]: https://arxiv.org/abs/1911.11641
272
+ [socialiqa]: https://arxiv.org/abs/1904.09728
273
+ [triviaqa]: https://arxiv.org/abs/1705.03551
274
+ [naturalq]: https://github.com/google-research-datasets/natural-questions
275
+ [arc]: https://arxiv.org/abs/1911.01547
276
+ [winogrande]: https://arxiv.org/abs/1907.10641
277
+ [bbh]: https://paperswithcode.com/dataset/bbh
278
+ [drop]: https://arxiv.org/abs/1903.00161
279
+
280
+ #### STEM and code
281
+
282
+ | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
283
+ | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
284
+ | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
285
+ | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
286
+ | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
287
+ | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
288
+ | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
289
+ | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
290
+ | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
291
+ | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
292
+
293
+ [mmlu]: https://arxiv.org/abs/2009.03300
294
+ [agieval]: https://arxiv.org/abs/2304.06364
295
+ [math]: https://arxiv.org/abs/2103.03874
296
+ [gsm8k]: https://arxiv.org/abs/2110.14168
297
+ [gpqa]: https://arxiv.org/abs/2311.12022
298
+ [mbpp]: https://arxiv.org/abs/2108.07732
299
+ [humaneval]: https://arxiv.org/abs/2107.03374
300
+
301
+ #### Multilingual
302
+
303
+ | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
304
+ | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
305
+ | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
306
+ | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
307
+ | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
308
+ | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
309
+ | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
310
+ | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
311
+ | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
312
+
313
+ [mgsm]: https://arxiv.org/abs/2210.03057
314
+ [flores]: https://arxiv.org/abs/2106.03193
315
+ [xquad]: https://arxiv.org/abs/1910.11856v3
316
+ [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
317
+ [wmt24pp]: https://arxiv.org/abs/2502.12404v1
318
+ [eclektic]: https://arxiv.org/abs/2502.21228
319
+ [indicgenbench]: https://arxiv.org/abs/2404.16816
320
+
321
+ #### Multimodal
322
+
323
+ | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
324
+ | ------------------------------ |:-------------:|:--------------:|:--------------:|
325
+ | [COCOcap][coco-cap] | 102 | 111 | 116 |
326
+ | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
327
+ | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
328
+ | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
329
+ | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
330
+ | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
331
+ | [ReMI][remi] | 27.3 | 38.5 | 44.8 |
332
+ | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
333
+ | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
334
+ | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
335
+ | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
336
+ | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
337
+ | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
338
+ | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
339
+ | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
340
+
341
+ [coco-cap]: https://cocodataset.org/#home
342
+ [docvqa]: https://www.docvqa.org/
343
+ [info-vqa]: https://arxiv.org/abs/2104.12756
344
+ [mmmu]: https://arxiv.org/abs/2311.16502
345
+ [textvqa]: https://textvqa.org/
346
+ [realworldqa]: https://paperswithcode.com/dataset/realworldqa
347
+ [remi]: https://arxiv.org/html/2406.09175v1
348
+ [ai2d]: https://allenai.org/data/diagrams
349
+ [chartqa]: https://arxiv.org/abs/2203.10244
350
+ [vqav2]: https://visualqa.org/index.html
351
+ [blinkvqa]: https://arxiv.org/abs/2404.12390
352
+ [okvqa]: https://okvqa.allenai.org/
353
+ [tallyqa]: https://arxiv.org/abs/1810.12440
354
+ [ss-vqa]: https://arxiv.org/abs/1908.02660
355
+ [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
356
+
357
+ ## Ethics and Safety
358
+
359
+ Ethics and safety evaluation approach and results.
360
+
361
+ ### Evaluation Approach
362
+
363
+ Our evaluation methods include structured evaluations and internal red-teaming
364
+ testing of relevant content policies. Red-teaming was conducted by a number of
365
+ different teams, each with different goals and human evaluation metrics. These
366
+ models were evaluated against a number of different categories relevant to
367
+ ethics and safety, including:
368
+
369
+ - **Child Safety**: Evaluation of text-to-text and image-to-text prompts
370
+ covering child safety policies, including child sexual abuse and
371
+ exploitation.
372
+ - **Content Safety:** Evaluation of text-to-text and image-to-text prompts
373
+ covering safety policies including harassment, violence and gore, and hate
374
+ speech.
375
+ - **Representational Harms**: Evaluation of text-to-text and image-to-text
376
+ prompts covering safety policies including bias, stereotyping, and harmful
377
+ associations or inaccuracies.
378
+
379
+ In addition to development level evaluations, we conduct "assurance
380
+ evaluations" which are our 'arms-length' internal evaluations for responsibility
381
+ governance decision making. They are conducted separately from the model
382
+ development team, to inform decision making about release. High level findings
383
+ are fed back to the model team, but prompt sets are held-out to prevent
384
+ overfitting and preserve the results' ability to inform decision making.
385
+ Assurance evaluation results are reported to our Responsibility & Safety Council
386
+ as part of release review.
387
+
388
+ ### Evaluation Results
389
+
390
+ For all areas of safety testing, we saw major improvements in the categories of
391
+ child safety, content safety, and representational harms relative to previous
392
+ Gemma models. All testing was conducted without safety filters to evaluate the
393
+ model capabilities and behaviors. For both text-to-text and image-to-text, and
394
+ across all model sizes, the model produced minimal policy violations, and showed
395
+ significant improvements over previous Gemma models' performance with respect
396
+ to ungrounded inferences. A limitation of our evaluations was that they included only
397
+ English-language prompts.
398
+
399
+ ## Usage and Limitations
400
+
401
+ These models have certain limitations that users should be aware of.
402
+
403
+ ### Intended Usage
404
+
405
+ Open vision-language models (VLMs) have a wide range of applications
406
+ across various industries and domains. The following list of potential uses is
407
+ not comprehensive. The purpose of this list is to provide contextual information
408
+ about the possible use-cases that the model creators considered as part of model
409
+ training and development.
410
+
411
+ - Content Creation and Communication
412
+ - Text Generation: These models can be used to generate creative text
413
+ formats such as poems, scripts, code, marketing copy, and email drafts.
414
+ - Chatbots and Conversational AI: Power conversational interfaces
415
+ for customer service, virtual assistants, or interactive applications.
416
+ - Text Summarization: Generate concise summaries of a text corpus,
417
+ research papers, or reports.
418
+ - Image Data Extraction: These models can be used to extract,
419
+ interpret, and summarize visual data for text communications.
420
+ - Research and Education
421
+ - Natural Language Processing (NLP) and VLM Research: These
422
+ models can serve as a foundation for researchers to experiment with VLM
423
+ and NLP techniques, develop algorithms, and contribute to the
424
+ advancement of the field.
425
+ - Language Learning Tools: Support interactive language learning
426
+ experiences, aiding in grammar correction or providing writing practice.
427
+ - Knowledge Exploration: Assist researchers in exploring large
428
+ bodies of text by generating summaries or answering questions about
429
+ specific topics.
430
+
431
+ ### Limitations
432
+
433
+ - Training Data
434
+ - The quality and diversity of the training data significantly
435
+ influence the model's capabilities. Biases or gaps in the training data
436
+ can lead to limitations in the model's responses.
437
+ - The scope of the training dataset determines the subject areas
438
+ the model can handle effectively.
439
+ - Context and Task Complexity
440
+ - Models are better at tasks that can be framed with clear
441
+ prompts and instructions. Open-ended or highly complex tasks might be
442
+ challenging.
443
+ - A model's performance can be influenced by the amount of context
444
+ provided (longer context generally leads to better outputs, up to a
445
+ certain point).
446
+ - Language Ambiguity and Nuance
447
+ - Natural language is inherently complex. Models might struggle
448
+ to grasp subtle nuances, sarcasm, or figurative language.
449
+ - Factual Accuracy
450
+ - Models generate responses based on information they learned
451
+ from their training datasets, but they are not knowledge bases. They
452
+ may generate incorrect or outdated factual statements.
453
+ - Common Sense
454
+ - Models rely on statistical patterns in language. They might
455
+ lack the ability to apply common sense reasoning in certain situations.
456
+
457
+ ### Ethical Considerations and Risks
458
+
459
+ The development of vision-language models (VLMs) raises several ethical
460
+ concerns. In creating an open model, we have carefully considered the following:
461
+
462
+ - Bias and Fairness
463
+ - VLMs trained on large-scale, real-world text and image data can
464
+ reflect socio-cultural biases embedded in the training material. These
465
+ models underwent careful scrutiny, with input data pre-processing described
466
+ and posterior evaluations reported in this card.
467
+ - Misinformation and Misuse
468
+ - VLMs can be misused to generate text that is false, misleading,
469
+ or harmful.
470
+ - Guidelines are provided for responsible use of the model; see the
471
+ [Responsible Generative AI Toolkit][rai-toolkit].
472
+ - Transparency and Accountability:
473
+ - This model card summarizes details on the models' architecture,
474
+ capabilities, limitations, and evaluation processes.
475
+ - A responsibly developed open model offers the opportunity to
476
+ share innovation by making VLM technology accessible to developers and
477
+ researchers across the AI ecosystem.
478
+
479
+ Risks identified and mitigations:
480
+
481
+ - **Perpetuation of biases**: Users are encouraged to perform continuous
482
+ monitoring (using evaluation metrics, human review) and to explore
483
+ de-biasing techniques during model training, fine-tuning, and other use
484
+ cases.
485
+ - **Generation of harmful content**: Mechanisms and guidelines for content
486
+ safety are essential. Developers are encouraged to exercise caution and
487
+ implement appropriate content safety safeguards based on their specific
488
+ product policies and application use cases.
489
+ - **Misuse for malicious purposes**: Technical limitations and developer
490
+ and end-user education can help mitigate malicious applications of
491
+ VLMs. Educational resources and reporting mechanisms for users to flag
492
+ misuse are provided. Prohibited uses of Gemma models are outlined in the
493
+ [Gemma Prohibited Use Policy][prohibited-use].
494
+ - **Privacy violations**: Models were trained on data filtered for removal
495
+ of certain personal information and other sensitive data. Developers are
496
+ encouraged to adhere to privacy regulations with privacy-preserving
497
+ techniques.
498
+
499
+ ### Benefits
500
+
501
+ At the time of release, this family of models provides high-performance open
502
+ vision-language model implementations designed from the ground up for
503
+ responsible AI development compared to similarly sized models.
504
+
505
+ Using the benchmark evaluation metrics described in this document, these models
506
+ have been shown to provide superior performance to other, comparably sized open model
507
+ alternatives.
508
+
509
+ [g3-tech-report]: https://goo.gle/Gemma3Report
510
+ [rai-toolkit]: https://ai.google.dev/responsible
511
+ [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
512
+ [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
513
+ [terms]: https://ai.google.dev/gemma/terms
514
+ [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
515
+ [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
516
+ [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
517
+ [sustainability]: https://sustainability.google/operating-sustainably/
518
+ [jax]: https://github.com/jax-ml/jax
519
+ [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
521
+ [gemini-2-paper]: https://arxiv.org/abs/2312.11805
added_tokens.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "<image_soft_token>": 262144
3
+ }
config.json ADDED
@@ -0,0 +1,44 @@
1
+ {
2
+ "architectures": [
3
+ "Gemma3ForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "attn_logit_softcapping": null,
8
+ "bos_token_id": 2,
9
+ "cache_implementation": "hybrid",
10
+ "eos_token_id": [
11
+ 1,
12
+ 106
13
+ ],
14
+ "final_logit_softcapping": null,
15
+ "head_dim": 256,
16
+ "hidden_activation": "gelu_pytorch_tanh",
17
+ "hidden_size": 1152,
18
+ "initializer_range": 0.02,
19
+ "intermediate_size": 6912,
20
+ "max_position_embeddings": 32768,
21
+ "model_type": "gemma3_text",
22
+ "num_attention_heads": 4,
23
+ "num_hidden_layers": 26,
24
+ "num_key_value_heads": 1,
25
+ "pad_token_id": 0,
26
+ "query_pre_attn_scalar": 256,
27
+ "rms_norm_eps": 1e-06,
28
+ "rope_local_base_freq": 10000,
29
+ "rope_scaling": null,
30
+ "rope_theta": 1000000,
31
+ "sliding_window": 512,
32
+ "sliding_window_pattern": 6,
33
+ "torch_dtype": "bfloat16",
34
+ "transformers_version": "4.50.0.dev0",
35
+ "use_cache": true,
36
+ "vocab_size": 262144,
37
+ "quantization_config": {
38
+ "bits": 4,
39
+ "group_size": 32,
40
+ "quant_method": "awq",
41
+ "version": "gemm",
42
+ "zero_point": true
43
+ }
44
+ }
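The `quantization_config` block above is what makes Transformers load this checkpoint through its AWQ integration (4-bit weights, 32-element groups, GEMM kernels). A quick sanity check after download (a minimal sketch; the model id is a placeholder for this repo or a local path):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("gaunernst/gemma-3-1b-it-int4-awq")  # placeholder id
print(config.quantization_config)  # expect bits=4, group_size=32, quant_method="awq", zero_point=True
```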
convert_flax.py ADDED
@@ -0,0 +1,149 @@
+ import argparse
+ from pathlib import Path
+ 
+ import jax.numpy as jnp
+ import numpy as np
+ from safetensors.flax import save_file
+ from tqdm import tqdm
+ 
+ from gemma import gm
+ 
+ 
+ def flatten(x: jnp.ndarray, start: int = 0, end: int = -1):
+     if start < 0:
+         start += x.ndim
+     if end < 0:
+         end += x.ndim
+     new_shape = x.shape[:start] + (-1,) + x.shape[end + 1 :]
+     return x.reshape(new_shape)
+ 
+ 
+ def unflatten(x: jnp.ndarray, dim: int, sizes: tuple[int, ...]):
+     new_shape = x.shape[:dim] + tuple(sizes) + x.shape[dim + 1 :]
+     return x.reshape(new_shape)
+ 
+ 
+ # correct quantization parameters mean quantization error = 0 (or close to 0)
+ def check_groups(groups: jnp.ndarray, scales: jnp.ndarray, dim: int):
+     # groups: (a, b, c, 32, d, e, f)
+     # scales: (a, b, c, 1, d, e, f)
+     inv_scale = 1.0 / scales.clip(1e-12)
+     q_group = jnp.round(groups * inv_scale)
+     max_diff = jnp.abs(q_group * scales - groups).max(dim, keepdims=True)
+     return max_diff < 1e-6, max_diff
+ 
+ 
+ def find_scales(w: jnp.ndarray, dim: int, pbar: bool = True):
+     w = unflatten(w, dim, (-1, 32))
+     group_range = w.max(dim + 1, keepdims=True) - w.min(dim + 1, keepdims=True)
+ 
+     scales = np.zeros_like(group_range)
+     for q in tqdm(range(15, 0, -1), disable=not pbar):
+         try_scale = group_range / q
+         ok, _ = check_groups(w, try_scale, dim + 1)
+         scales[ok] = try_scale[ok]
+ 
+     ok, _ = check_groups(w, scales, dim + 1)
+     assert ok.all()
+ 
+     return scales.squeeze(dim + 1)
+ 
+ 
+ # convert to HF format first, then apply quantization
+ def convert_to_hf(params):
+     state_dict = dict()
+     # TODO: output projection
+     state_dict["model.embed_tokens.weight"] = params["embedder"]["input_embedding"]
+     state_dict["model.norm.weight"] = params["final_norm"]["scale"]
+ 
+     layer_idx = 0
+     while f"layer_{layer_idx}" in params:
+         prefix = f"model.layers.{layer_idx}."
+         layer_params = params[f"layer_{layer_idx}"]
+         state_dict[f"{prefix}input_layernorm.weight"] = layer_params["pre_attention_norm"]["scale"]
+         state_dict[f"{prefix}post_attention_layernorm.weight"] = layer_params["post_attention_norm"]["scale"]
+         state_dict[f"{prefix}pre_feedforward_layernorm.weight"] = layer_params["pre_ffw_norm"]["scale"]
+         state_dict[f"{prefix}post_feedforward_layernorm.weight"] = layer_params["post_ffw_norm"]["scale"]
+ 
+         prefix = f"model.layers.{layer_idx}.self_attn."
+         attn_params = layer_params["attn"]
+         state_dict[f"{prefix}q_norm.weight"] = attn_params["_query_norm"]["scale"]
+         state_dict[f"{prefix}k_norm.weight"] = attn_params["_key_norm"]["scale"]
+ 
+         # (num_heads, hidden_size, head_dim) -> (num_heads * head_dim, hidden_size)
+         state_dict[f"{prefix}q_proj.weight"] = flatten(attn_params["q_einsum"]["w"].transpose(0, 2, 1), end=1)
+         state_dict[f"{prefix}k_proj.weight"] = flatten(attn_params["kv_einsum"]["w"][0].transpose(0, 2, 1), end=1)
+         state_dict[f"{prefix}v_proj.weight"] = flatten(attn_params["kv_einsum"]["w"][1].transpose(0, 2, 1), end=1)
+ 
+         # (num_heads, head_dim, hidden_size) -> (hidden_size, num_heads * head_dim)
+         state_dict[f"{prefix}o_proj.weight"] = flatten(attn_params["attn_vec_einsum"]["w"], end=1).T
+ 
+         prefix = f"model.layers.{layer_idx}.mlp."
+         mlp_params = layer_params["mlp"]
+         state_dict[f"{prefix}gate_proj.weight"] = mlp_params["gating_einsum"][0]  # NOTE: may need to transpose?
+         state_dict[f"{prefix}up_proj.weight"] = mlp_params["gating_einsum"][1]
+         state_dict[f"{prefix}down_proj.weight"] = mlp_params["linear"].T
+ 
+         layer_idx += 1
+ 
+     return state_dict
+ 
+ 
+ def convert_awq(state_dict: dict[str, jnp.ndarray]):
+     awq_state_dict = dict()
+ 
+     for k, v in tqdm(state_dict.items(), total=len(state_dict)):
+         # AWQ doesn't support INT4 embeddings
+         if k == "model.embed_tokens.weight" or v.ndim == 1:
+             awq_state_dict[k] = v.astype(jnp.bfloat16)
+             continue
+ 
+         assert v.ndim == 2
+         v = v.T  # AWQ transposes the weight
+ 
+         # use numpy since jnp is very slow, likely due to bad memory management on CUDA
+         v = np.asarray(v)
+         K, N = v.shape
+         scales = find_scales(v, dim=0, pbar=False)  # (K/32, N)
+         inv_scale = 1 / scales.clip(1e-12)
+         qweight = np.round(v.reshape(K // 32, 32, N) * inv_scale[:, None])
+ 
+         # AWQ is actually UINT4 (instead of INT4)
+         # hence, we will shift qweight up by 8 (even though Google AQT only uses [-7,7])
+         # and set zero_point = 8
+         qweight = (qweight + 8).astype(np.uint32)
+ 
+         # AWQ packs 8 INT4 values into a UINT32 in the following layout (from high bits to low bits)
+         # [7 5 3 1 6 4 2 0] along the 2nd dim
+         qweight = qweight.reshape(K, N // 8, 8)
+         qweight_packed = (
+             (qweight[..., 7] << (7 * 4))
+             | (qweight[..., 5] << (6 * 4))
+             | (qweight[..., 3] << (5 * 4))
+             | (qweight[..., 1] << (4 * 4))
+             | (qweight[..., 6] << (3 * 4))
+             | (qweight[..., 4] << (2 * 4))
+             | (qweight[..., 2] << (1 * 4))
+             | (qweight[..., 0] << (0 * 4))
+         )
+         qweight_packed = qweight_packed.view(np.int32).reshape(K, N // 8)
+ 
+         prefix = k.removesuffix(".weight")
+         awq_state_dict[f"{prefix}.qweight"] = qweight_packed
+         # 0x8888_8888 packs the zero-point value 8 into all eight 4-bit slots of each uint32
+         awq_state_dict[f"{prefix}.qzeros"] = np.full((K // 32, N // 8), 0x8888_8888, dtype=np.uint32).view(np.int32)
+         awq_state_dict[f"{prefix}.scales"] = jnp.asarray(scales).astype(jnp.bfloat16)
+ 
+     return awq_state_dict
+ 
+ 
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--ckpt_dir", required=True, type=Path)
+     parser.add_argument("--save_path", required=True, type=Path)
+     args = parser.parse_args()
+ 
+     params = gm.ckpts.load_params(args.ckpt_dir.absolute())
+     state_dict = convert_to_hf(params)
+     awq_state_dict = convert_awq(state_dict)
+     args.save_path.parent.mkdir(parents=True, exist_ok=True)
+     save_file(awq_state_dict, args.save_path)
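For reference, a usage sketch for the conversion script above (assuming the Kaggle QAT Flax checkpoint has been downloaded locally and the `gemma` package that provides `gm.ckpts` is installed; paths are placeholders):

```sh
python convert_flax.py --ckpt_dir /path/to/gemma-3-1b-it-int4-flax --save_path model.safetensors
```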
generation_config.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 2,
4
+ "cache_implementation": "hybrid",
5
+ "eos_token_id": [
6
+ 1,
7
+ 106
8
+ ],
9
+ "pad_token_id": 0,
10
+ "transformers_version": "4.50.0.dev0"
11
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34942eb77ac2e60a2f590a0302a84bf8dd0e861bbf0423cfab5bacb866a3a53a
3
+ size 1007720744
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "boi_token": "<start_of_image>",
3
+ "bos_token": {
4
+ "content": "<bos>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false
9
+ },
10
+ "eoi_token": "<end_of_image>",
11
+ "eos_token": {
12
+ "content": "<eos>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false
17
+ },
18
+ "image_token": "<image_soft_token>",
19
+ "pad_token": {
20
+ "content": "<pad>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false
25
+ },
26
+ "unk_token": {
27
+ "content": "<unk>",
28
+ "lstrip": false,
29
+ "normalized": false,
30
+ "rstrip": false,
31
+ "single_word": false
32
+ }
33
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
3
+ size 33384568
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
3
+ size 4689074
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff