rahul7star committed
Commit 970d320 · verified · 1 Parent(s): 112252d

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,552 @@
---
base_model:
- google/gemma-3-4b-it
tags:
- torchao-my-repo
---
# google/gemma-3-4b-it (Quantized)

## Quantization Details
- **Quantization Type**: Int8WeightOnly
- **Group Size**: 128

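As a quick orientation, here is a minimal sketch of loading this torchao-quantized checkpoint with Transformers. `<this-repo-id>` is a placeholder for this repository's id (not stated above), and the snippet assumes `torchao` and `accelerate` are installed alongside a recent Transformers release; it is illustrative, not an official recipe.

```python
# Minimal sketch (assumptions noted above): load the Int8WeightOnly checkpoint.
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "<this-repo-id>"  # placeholder for this quantized repository's id

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",  # keep the dtype/quantization the weights were serialized with
)
processor = AutoProcessor.from_pretrained(model_id)
```

A checkpoint like this is typically produced from the base model with a `TorchAoConfig` quantization config (for example, `TorchAoConfig("int8_weight_only")`), though the exact call used for this upload is not recorded here.
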
# 📄 Original Model Info

---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-4b-pt
---

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.

### Inputs and outputs

- **Input:**
    - Text string, such as a question, a prompt, or a document to be summarized
    - Images, normalized to 896 x 896 resolution and encoded to 256 tokens
      each
    - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
      32K tokens for the 1B size

- **Output:**
    - Generated text in response to the input, such as an answer to a
      question, analysis of image content, or a summary of a document
    - Total output context of 8192 tokens

### Usage

Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library; Gemma 3 is supported starting from transformers 4.50.0.

```sh
$ pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

You can initialize the model and processor for inference with `pipeline` as follows.

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-4b-it",
    device="cuda",
    torch_dtype=torch.bfloat16
)
```

With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.

```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```

#### Running the model on a single/multi GPU

```python
# pip install accelerate

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3-4b-it"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```

### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, the 4B model was trained with 4 trillion tokens,
and the 1B model with 2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is
  exposed to a broad range of linguistic styles, topics, and vocabulary. The
  training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
  patterns of programming languages, which improves its ability to generate
  code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
  analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
  was applied at multiple stages in the data preparation process to ensure
  the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
  safe and reliable, automated techniques were used to filter out certain
  personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
  line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive
  computations involved in training VLMs. They can speed up training
  considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
  allowing for the handling of large models and batch sizes during training.
  This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
  solution for handling the growing complexity of large foundation models.
  You can distribute training across multiple TPU devices for faster and more
  efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
  cost-effective solution for training large models compared to CPU-based
  infrastructure, especially when considering the time and resources saved
  due to faster training.
- These advantages are aligned with
  [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies, including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies, including bias, stereotyping, and harmful
  associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance
evaluations", which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High-level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

- Content Creation and Communication
    - Text Generation: These models can be used to generate creative text
      formats such as poems, scripts, code, marketing copy, and email drafts.
    - Chatbots and Conversational AI: Power conversational interfaces
      for customer service, virtual assistants, or interactive applications.
    - Text Summarization: Generate concise summaries of a text corpus,
      research papers, or reports.
    - Image Data Extraction: These models can be used to extract,
      interpret, and summarize visual data for text communications.
- Research and Education
    - Natural Language Processing (NLP) and VLM Research: These
      models can serve as a foundation for researchers to experiment with VLM
      and NLP techniques, develop algorithms, and contribute to the
      advancement of the field.
    - Language Learning Tools: Support interactive language learning
      experiences, aiding in grammar correction or providing writing practice.
    - Knowledge Exploration: Assist researchers in exploring large
      bodies of text by generating summaries or answering questions about
      specific topics.

### Limitations

- Training Data
    - The quality and diversity of the training data significantly
      influence the model's capabilities. Biases or gaps in the training data
      can lead to limitations in the model's responses.
    - The scope of the training dataset determines the subject areas
      the model can handle effectively.
- Context and Task Complexity
    - Models are better at tasks that can be framed with clear
      prompts and instructions. Open-ended or highly complex tasks might be
      challenging.
    - A model's performance can be influenced by the amount of context
      provided (longer context generally leads to better outputs, up to a
      certain point).
- Language Ambiguity and Nuance
    - Natural language is inherently complex. Models might struggle
      to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
    - Models generate responses based on information they learned
      from their training datasets, but they are not knowledge bases. They
      may generate incorrect or outdated factual statements.
- Common Sense
    - Models rely on statistical patterns in language. They might
      lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
    - VLMs trained on large-scale, real-world text and image data can
      reflect socio-cultural biases embedded in the training material. These
      models underwent careful scrutiny; input data pre-processing is described
      and posterior evaluations are reported in this card.
- Misinformation and Misuse
    - VLMs can be misused to generate text that is false, misleading,
      or harmful.
    - Guidelines are provided for responsible use with the model, see the
      [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
    - This model card summarizes details on the models' architecture,
      capabilities, limitations, and evaluation processes.
    - A responsibly developed open model offers the opportunity to
      share innovation by making VLM technology accessible to developers and
      researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: It's encouraged to perform continuous
  monitoring (using evaluation metrics, human review) and the exploration of
  de-biasing techniques during model training, fine-tuning, and other use
  cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
  safety are essential. Developers are encouraged to exercise caution and
  implement appropriate content safety safeguards based on their specific
  product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
  and end-user education can help mitigate against malicious applications of
  VLMs. Educational resources and reporting mechanisms for users to flag
  misuse are provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
  of certain personal information and other sensitive data. Developers are
  encouraged to adhere to privacy regulations with privacy-preserving
  techniques.

### Benefits

At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
chat_template.jinja ADDED
@@ -0,0 +1,47 @@
{{ bos_token }}
{%- if messages[0]['role'] == 'system' -%}
    {%- if messages[0]['content'] is string -%}
        {%- set first_user_prefix = messages[0]['content'] + '

' -%}
    {%- else -%}
        {%- set first_user_prefix = messages[0]['content'][0]['text'] + '

' -%}
    {%- endif -%}
    {%- set loop_messages = messages[1:] -%}
{%- else -%}
    {%- set first_user_prefix = "" -%}
    {%- set loop_messages = messages -%}
{%- endif -%}
{%- for message in loop_messages -%}
    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
        {{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
    {%- endif -%}
    {%- if (message['role'] == 'assistant') -%}
        {%- set role = "model" -%}
    {%- else -%}
        {%- set role = message['role'] -%}
    {%- endif -%}
    {{ '<start_of_turn>' + role + '
' + (first_user_prefix if loop.first else "") }}
    {%- if message['content'] is string -%}
        {{ message['content'] | trim }}
    {%- elif message['content'] is iterable -%}
        {%- for item in message['content'] -%}
            {%- if item['type'] == 'image' -%}
                {{ '<start_of_image>' }}
            {%- elif item['type'] == 'text' -%}
                {{ item['text'] | trim }}
            {%- endif -%}
        {%- endfor -%}
    {%- else -%}
        {{ raise_exception("Invalid content type") }}
    {%- endif -%}
    {{ '<end_of_turn>
' }}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{'<start_of_turn>model
'}}
{%- endif -%}
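
For illustration only, a small sketch of how this template renders a conversation via `apply_chat_template`. The tokenizer id (`google/gemma-3-4b-it`, the base model) and the expected output shown in comments are inferred from the template above rather than taken from this commit.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")  # assumed base repo

messages = [{"role": "user", "content": "Write a haiku about autumn."}]

# tokenize=False returns the rendered prompt string instead of token ids
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected shape, per the template above:
# <bos><start_of_turn>user
# Write a haiku about autumn.<end_of_turn>
# <start_of_turn>model
```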
config.json ADDED
@@ -0,0 +1,112 @@
{
  "architectures": [
    "Gemma3Model"
  ],
  "boi_token_index": 255999,
  "dtype": "bfloat16",
  "eoi_token_index": 256000,
  "eos_token_id": [
    1,
    106
  ],
  "image_token_index": 262144,
  "initializer_range": 0.02,
  "mm_tokens_per_image": 256,
  "model_type": "gemma3",
  "quantization_config": {
    "include_input_output_embeddings": false,
    "modules_to_not_convert": null,
    "quant_method": "torchao",
    "quant_type": {
      "default": {
        "_data": {
          "group_size": null,
          "set_inductor_config": true
        },
        "_type": "Int8WeightOnlyConfig",
        "_version": 1
      }
    },
    "quant_type_kwargs": {},
    "untie_embedding_weights": false
  },
  "text_config": {
    "_sliding_window_pattern": 6,
    "attention_bias": false,
    "attention_dropout": 0.0,
    "attn_logit_softcapping": null,
    "final_logit_softcapping": null,
    "head_dim": 256,
    "hidden_activation": "gelu_pytorch_tanh",
    "hidden_size": 2560,
    "initializer_range": 0.02,
    "intermediate_size": 10240,
    "layer_types": [
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention"
    ],
    "max_position_embeddings": 131072,
    "model_type": "gemma3_text",
    "num_attention_heads": 8,
    "num_hidden_layers": 34,
    "num_key_value_heads": 4,
    "query_pre_attn_scalar": 256,
    "rms_norm_eps": 1e-06,
    "rope_local_base_freq": 10000.0,
    "rope_scaling": {
      "factor": 8.0,
      "rope_type": "linear"
    },
    "rope_theta": 1000000.0,
    "sliding_window": 1024,
    "use_cache": true,
    "vocab_size": 262208
  },
  "transformers_version": "4.56.2",
  "vision_config": {
    "attention_dropout": 0.0,
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 896,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_type": "siglip_vision_model",
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 27,
    "patch_size": 14,
    "vision_use_head": false
  }
}
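
To sanity-check the values above without downloading the weights, here is a minimal sketch using `AutoConfig`; `<this-repo-id>` is again a placeholder for this repository's id.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("<this-repo-id>")  # placeholder repo id

print(config.model_type)                     # gemma3
print(config.quantization_config)            # torchao Int8WeightOnlyConfig metadata
print(config.text_config.num_hidden_layers)  # 34
print(config.vision_config.image_size)       # 896
```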
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e1c28aae65896fc8c05acb03378dfe9e86468a3b0cf32e3a8678be10a120e11
size 4993869454
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
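
A short, hedged check of how these entries surface through the tokenizer API; it assumes the base `google/gemma-3-4b-it` tokenizer, which this repository's tokenizer files mirror.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")  # assumed source of this tokenizer

print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
# <bos> <eos> <pad> <unk>

# The image boundary token should map to boi_token_index from config.json (255999)
print(tok.convert_tokens_to_ids("<start_of_image>"))
```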
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff