bugwei committed
Commit 47e02c3 · verified · 1 Parent(s): 85d5f41

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,557 @@
---
base_model:
- google/gemma-3-27b-it
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- bnb-my-repo
---
# google/gemma-3-27b-it (Quantized)

## Description
This model is a quantized version of the original model [`google/gemma-3-27b-it`](https://huggingface.co/google/gemma-3-27b-it).

It was quantized to 4-bit with the BitsAndBytes library using the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space.

## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: bfloat16

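Because the `quantization_config` above is stored with the checkpoint, loading this repository with `from_pretrained` should pick up the 4-bit settings automatically. As a rough sketch (not part of the original card), an equivalent quantization can also be reproduced from the full-precision weights with `BitsAndBytesConfig`, assuming a CUDA GPU and the `bitsandbytes` package are available:

```python
import torch
from transformers import BitsAndBytesConfig, Gemma3ForConditionalGeneration

# 4-bit NF4 settings mirroring the quantization details listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

# Quantize the original full-precision weights on load
model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-27b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
```
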
# 📄 Original Model Information

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.

### Inputs and outputs

- **Input:**
  - Text string, such as a question, a prompt, or a document to be summarized
  - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
  - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
    32K tokens for the 1B size

- **Output:**
  - Generated text in response to the input, such as an answer to a
    question, analysis of image content, or a summary of a document
  - Total output context of 8192 tokens

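As a back-of-the-envelope illustration of these limits (my addition, not part of the original card; it takes "128K" as the 131,072 positions listed in `config.json` below), the input budget can be estimated by charging 256 tokens per image on top of the text tokens:

```python
IMAGE_TOKENS = 256        # each image is encoded to 256 tokens
CONTEXT_LIMIT = 131_072   # "128K" input window for the 4B/12B/27B sizes (32K for 1B)

def fits_in_context(text_tokens: int, num_images: int) -> bool:
    """Rough check that text plus images stays within the input window."""
    return text_tokens + num_images * IMAGE_TOKENS <= CONTEXT_LIMIT

print(fits_in_context(text_tokens=120_000, num_images=10))  # True  (122,560 tokens)
print(fits_in_context(text_tokens=130_500, num_images=4))   # False (131,524 tokens)
```
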
### Usage

Below are some code snippets to help you quickly get started running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.

```sh
$ pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

You can initialize the model and processor for inference with `pipeline` as follows.

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-27b-it",
    device="cuda",
    torch_dtype=torch.bfloat16
)
```

With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.

```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```

#### Running the model on a single/multi GPU

```python
# pip install accelerate

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3-27b-it"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```

### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, the 4B model was trained with 4 trillion tokens,
and the 1B with 2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is
  exposed to a broad range of linguistic styles, topics, and vocabulary. The
  training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
  patterns of programming languages, which improves its ability to generate
  code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
  analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
  was applied at multiple stages in the data preparation process to ensure
  the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
  safe and reliable, automated techniques were used to filter out certain
  personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
  line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive
  computations involved in training VLMs. They can speed up training
  considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
  allowing for the handling of large models and batch sizes during training.
  This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
  solution for handling the growing complexity of large foundation models.
  You can distribute training across multiple TPU devices for faster and more
  efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
  cost-effective solution for training large models compared to CPU-based
  infrastructure, especially when considering the time and resources saved
  due to faster training.
- These advantages are aligned with
  [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark                     | Metric   | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ----------------------------- |----------|:-------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag]        | 10-shot  | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq]                | 0-shot   | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa]                  | 0-shot   | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa]        | 0-shot   | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa]          | 5-shot   | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot   | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc]                  | 25-shot  | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc]                  | 0-shot   | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande]      | 5-shot   | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh]         | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop]                  | 1-shot   | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark              | Metric   | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ---------------------- |----------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu]           | 5-shot   | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot   | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval]     | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math]           | 4-shot   | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k]         | 8-shot   | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa]           | 5-shot   | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp]           | 3-shot   | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot   | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark                            | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm]                         | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF)            | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores]                     | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all)                 | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic]                 | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench]       | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark                    | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ---------------------------- |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap]          | 102  | 111  | 116  |
| [DocVQA][docvqa] (val)       | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val)    | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt)            | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val)     | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa]   | 45.5 | 52.2 | 53.9 |
| [ReMI][remi]                 | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d]                 | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa]           | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2]               | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa]            | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa]               | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa]           | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa]   | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies including bias, stereotyping, and harmful
  associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High-level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

- Content Creation and Communication
  - Text Generation: These models can be used to generate creative text
    formats such as poems, scripts, code, marketing copy, and email drafts.
  - Chatbots and Conversational AI: Power conversational interfaces
    for customer service, virtual assistants, or interactive applications.
  - Text Summarization: Generate concise summaries of a text corpus,
    research papers, or reports.
  - Image Data Extraction: These models can be used to extract,
    interpret, and summarize visual data for text communications.
- Research and Education
  - Natural Language Processing (NLP) and VLM Research: These
    models can serve as a foundation for researchers to experiment with VLM
    and NLP techniques, develop algorithms, and contribute to the
    advancement of the field.
  - Language Learning Tools: Support interactive language learning
    experiences, aiding in grammar correction or providing writing practice.
  - Knowledge Exploration: Assist researchers in exploring large
    bodies of text by generating summaries or answering questions about
    specific topics.

### Limitations

- Training Data
  - The quality and diversity of the training data significantly
    influence the model's capabilities. Biases or gaps in the training data
    can lead to limitations in the model's responses.
  - The scope of the training dataset determines the subject areas
    the model can handle effectively.
- Context and Task Complexity
  - Models are better at tasks that can be framed with clear
    prompts and instructions. Open-ended or highly complex tasks might be
    challenging.
  - A model's performance can be influenced by the amount of context
    provided (longer context generally leads to better outputs, up to a
    certain point).
- Language Ambiguity and Nuance
  - Natural language is inherently complex. Models might struggle
    to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
  - Models generate responses based on information they learned
    from their training datasets, but they are not knowledge bases. They
    may generate incorrect or outdated factual statements.
- Common Sense
  - Models rely on statistical patterns in language. They might
    lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
  - VLMs trained on large-scale, real-world text and image data can
    reflect socio-cultural biases embedded in the training material. These
    models underwent careful scrutiny, with input data pre-processing
    described and posterior evaluations reported in this card.
- Misinformation and Misuse
  - VLMs can be misused to generate text that is false, misleading,
    or harmful.
  - Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
  - This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  - A responsibly developed open model offers the opportunity to
    share innovation by making VLM technology accessible to developers and
    researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: Continuous monitoring (using evaluation
  metrics, human review) and the exploration of de-biasing techniques during
  model training, fine-tuning, and other use cases are encouraged.
- **Generation of harmful content**: Mechanisms and guidelines for content
  safety are essential. Developers are encouraged to exercise caution and
  implement appropriate content safety safeguards based on their specific
  product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
  and end-user education can help mitigate malicious applications of
  VLMs. Educational resources and reporting mechanisms for users to flag
  misuse are provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
  of certain personal information and other sensitive data. Developers are
  encouraged to adhere to privacy regulations with privacy-preserving
  techniques.

### Benefits

At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance over other comparably sized open
model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
chat_template.jinja ADDED
@@ -0,0 +1,47 @@
{{ bos_token }}
{%- if messages[0]['role'] == 'system' -%}
{%- if messages[0]['content'] is string -%}
{%- set first_user_prefix = messages[0]['content'] + '

' -%}
{%- else -%}
{%- set first_user_prefix = messages[0]['content'][0]['text'] + '

' -%}
{%- endif -%}
{%- set loop_messages = messages[1:] -%}
{%- else -%}
{%- set first_user_prefix = "" -%}
{%- set loop_messages = messages -%}
{%- endif -%}
{%- for message in loop_messages -%}
{%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
{{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
{%- endif -%}
{%- if (message['role'] == 'assistant') -%}
{%- set role = "model" -%}
{%- else -%}
{%- set role = message['role'] -%}
{%- endif -%}
{{ '<start_of_turn>' + role + '
' + (first_user_prefix if loop.first else "") }}
{%- if message['content'] is string -%}
{{ message['content'] | trim }}
{%- elif message['content'] is iterable -%}
{%- for item in message['content'] -%}
{%- if item['type'] == 'image' -%}
{{ '<start_of_image>' }}
{%- elif item['type'] == 'text' -%}
{{ item['text'] | trim }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{ raise_exception("Invalid content type") }}
{%- endif -%}
{{ '<end_of_turn>
' }}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{'<start_of_turn>model
'}}
{%- endif -%}
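For illustration (my addition, assuming `google/gemma-3-27b-it` ships the same template as this file), the snippet below renders a short conversation through the chat template to show the `<start_of_turn>`/`<end_of_turn>` framing it produces:

```python
from transformers import AutoProcessor

# The original repository is assumed to carry an identical chat template
processor = AutoProcessor.from_pretrained("google/gemma-3-27b-it")

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    {"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]},
]

# Render without tokenizing to inspect the raw turn markers
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# <bos><start_of_turn>user
# You are a helpful assistant.
#
# What is the capital of France?<end_of_turn>
# <start_of_turn>model
```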
config.json ADDED
@@ -0,0 +1,137 @@
{
  "architectures": [
    "Gemma3Model"
  ],
  "boi_token_index": 255999,
  "eoi_token_index": 256000,
  "eos_token_id": [
    1,
    106
  ],
  "image_token_index": 262144,
  "initializer_range": 0.02,
  "mm_tokens_per_image": 256,
  "model_type": "gemma3",
  "quantization_config": {
    "_load_in_4bit": true,
    "_load_in_8bit": false,
    "bnb_4bit_compute_dtype": "bfloat16",
    "bnb_4bit_quant_storage": "bfloat16",
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": true,
    "llm_int8_enable_fp32_cpu_offload": false,
    "llm_int8_has_fp16_weight": false,
    "llm_int8_skip_modules": null,
    "llm_int8_threshold": 6.0,
    "load_in_4bit": true,
    "load_in_8bit": false,
    "quant_method": "bitsandbytes"
  },
  "text_config": {
    "attention_bias": false,
    "attention_dropout": 0.0,
    "attn_logit_softcapping": null,
    "final_logit_softcapping": null,
    "head_dim": 128,
    "hidden_activation": "gelu_pytorch_tanh",
    "hidden_size": 5376,
    "initializer_range": 0.02,
    "intermediate_size": 21504,
    "layer_types": [
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention"
    ],
    "max_position_embeddings": 131072,
    "model_type": "gemma3_text",
    "num_attention_heads": 32,
    "num_hidden_layers": 62,
    "num_key_value_heads": 16,
    "query_pre_attn_scalar": 168,
    "rms_norm_eps": 1e-06,
    "rope_local_base_freq": 10000.0,
    "rope_scaling": {
      "factor": 8.0,
      "rope_type": "linear"
    },
    "rope_theta": 1000000.0,
    "sliding_window": 1024,
    "use_cache": true,
    "vocab_size": 262208
  },
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.1",
  "vision_config": {
    "attention_dropout": 0.0,
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 896,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_type": "siglip_vision_model",
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 27,
    "patch_size": 14,
    "vision_use_head": false
  }
}
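As a small sketch (my addition; the file path is a placeholder for wherever this repository's files are downloaded), the stored quantization and architecture settings can be inspected without loading any weights:

```python
import json

# Placeholder path: the config.json shipped with this repository
with open("config.json") as f:
    config = json.load(f)

print(config["quantization_config"]["quant_method"])         # bitsandbytes
print(config["quantization_config"]["bnb_4bit_quant_type"])  # nf4
print(config["text_config"]["num_hidden_layers"])            # 62
print(config["vision_config"]["image_size"])                 # 896
```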
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05fe47a8686606ec015b171552cb41b8f901c3e15324e41220371e8a3b87348f
size 4995850722
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9993d1ffe1816e2d7f23a3a98f5ebc5ac306027e08e2f33bc95fd5723fec28de
size 4971133795
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8923856d65a49633c0c430851f3e7e9633cabb4264dddbb6a1bf63398f289c15
size 4959771503
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:659b8676bd886bd71b3fa103becfe8931de5c10c190b6ca089d97323fdb5cb57
size 1337988872
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff