willopcbeta committed
Commit 85fcde2 · 1 Parent(s): 554df88

Create README.md (#6)

Files changed (1): README.md (+502 −0)

README.md (added):

---
base_model:
- google/gemma-3-4b-it
- RLLab/gemma-3-4b-text-it
tags:
- text-generation-inference
- transformers
license: apache-2.0
language:
- multilingual
library_name: transformers.js
---

# Text generation only

This model produces text output only: the vision components were removed from the base model with a vision-removal algorithm, which also introduces some loss in accuracy.

Minimum required version of transformers.js: 4.0.0-next.2

# Export using onnxruntime-genai

The ONNX weights were exported with the following options:

```
--precision int4 --execution_provider webgpu --extra_options int4_accuracy_level=3 int4_block_size=128 int4_algo_config=rtn enable_webgpu_graph=true
```

Source: [RLLab/gemma-3-4b-text-it](https://huggingface.co/RLLab/gemma-3-4b-text-it)
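
For reference, a complete model-builder invocation might look like the sketch below. Only the flags shown above come from this card; the module entry point (`onnxruntime_genai.models.builder`) and the input/output paths are assumptions and may need adjusting for your environment.

```
# Sketch only: entry point and paths are assumed, flags are taken from this card.
python -m onnxruntime_genai.models.builder \
  -m RLLab/gemma-3-4b-text-it \
  -o ./gemma-3-4b-text-it-onnx \
  --precision int4 \
  --execution_provider webgpu \
  --extra_options int4_accuracy_level=3 int4_block_size=128 int4_algo_config=rtn enable_webgpu_graph=true
```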

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.

### Inputs and outputs

- **Input:**
    - Text string, such as a question, a prompt, or a document to be summarized
    - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
    - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
      32K tokens for the 1B size

- **Output:**
    - Generated text in response to the input, such as an answer to a
      question, analysis of image content, or a summary of a document
    - Total output context of 8192 tokens

### Usage

With instruction-tuned models, the inputs first need to be processed with the chat template; the formatted messages can then be passed to the pipeline.

```python
from transformers import pipeline
import torch

# Create the pipeline first (as in the base google/gemma-3-4b-it model card).
pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-4b-it",
    device="cuda",
    torch_dtype=torch.bfloat16
)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```
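
Since this export is text-only (the vision tower has been removed), a plain text conversation works as well. The sketch below loads the text-only source checkpoint with the `text-generation` pipeline; treating that checkpoint as a standard causal LM is an assumption, and the prompt is only illustrative.

```python
from transformers import pipeline
import torch

# Assumption: the text-only source checkpoint can be loaded as a causal LM.
text_pipe = pipeline(
    "text-generation",
    model="RLLab/gemma-3-4b-text-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in two sentences what int4 quantization trades off."},
]

# The text-generation pipeline applies the chat template and returns the full conversation.
output = text_pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```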

#### Running the model on a single/multi GPU

```python
# pip install accelerate

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3-4b-it"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
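
For a text-only run that matches this repository's "text generation only" setup, the direct `generate` API can be used in the same way. This is a sketch: it assumes the text-only checkpoint `RLLab/gemma-3-4b-text-it` exposes a standard causal-LM architecture, and the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "RLLab/gemma-3-4b-text-it"  # text-only source checkpoint (assumed loadable as a causal LM)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()

messages = [
    {"role": "user", "content": "Write a one-sentence summary of what WebGPU is."}
]

# apply_chat_template tokenizes the conversation and adds the generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    output_ids = model.generate(input_ids, max_new_tokens=100, do_sample=False)

# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```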


### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model with
12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model with
2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is
  exposed to a broad range of linguistic styles, topics, and vocabulary. The
  training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
  patterns of programming languages, which improves its ability to generate
  code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
  analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
  was applied at multiple stages in the data preparation process to ensure
  the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
  safe and reliable, automated techniques were used to filter out certain
  personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
  line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive
  computations involved in training VLMs. They can speed up training
  considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
  allowing for the handling of large models and batch sizes during training.
  This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
  solution for handling the growing complexity of large foundation models.
  You can distribute training across multiple TPU devices for faster and more
  efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
  cost-effective solution for training large models compared to CPU-based
  infrastructure, especially when considering the time and resources saved
  due to faster training.

These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies including bias, stereotyping, and harmful
  associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High-level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

- Content Creation and Communication
    - Text Generation: These models can be used to generate creative text
      formats such as poems, scripts, code, marketing copy, and email drafts.
    - Chatbots and Conversational AI: Power conversational interfaces
      for customer service, virtual assistants, or interactive applications.
    - Text Summarization: Generate concise summaries of a text corpus,
      research papers, or reports.
    - Image Data Extraction: These models can be used to extract,
      interpret, and summarize visual data for text communications.
- Research and Education
    - Natural Language Processing (NLP) and VLM Research: These
      models can serve as a foundation for researchers to experiment with VLM
      and NLP techniques, develop algorithms, and contribute to the
      advancement of the field.
    - Language Learning Tools: Support interactive language learning
      experiences, aiding in grammar correction or providing writing practice.
    - Knowledge Exploration: Assist researchers in exploring large
      bodies of text by generating summaries or answering questions about
      specific topics.

### Limitations

- Training Data
    - The quality and diversity of the training data significantly
      influence the model's capabilities. Biases or gaps in the training data
      can lead to limitations in the model's responses.
    - The scope of the training dataset determines the subject areas
      the model can handle effectively.
- Context and Task Complexity
    - Models are better at tasks that can be framed with clear
      prompts and instructions. Open-ended or highly complex tasks might be
      challenging.
    - A model's performance can be influenced by the amount of context
      provided (longer context generally leads to better outputs, up to a
      certain point).
- Language Ambiguity and Nuance
    - Natural language is inherently complex. Models might struggle
      to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
    - Models generate responses based on information they learned
      from their training datasets, but they are not knowledge bases. They
      may generate incorrect or outdated factual statements.
- Common Sense
    - Models rely on statistical patterns in language. They might
      lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
    - VLMs trained on large-scale, real-world text and image data can
      reflect socio-cultural biases embedded in the training material. These
      models underwent careful scrutiny, with input data pre-processing described
      and posterior evaluations reported in this card.
- Misinformation and Misuse
    - VLMs can be misused to generate text that is false, misleading,
      or harmful.
    - Guidelines are provided for responsible use with the model; see the
      [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
    - This model card summarizes details on the models' architecture,
      capabilities, limitations, and evaluation processes.
    - A responsibly developed open model offers the opportunity to
      share innovation by making VLM technology accessible to developers and
      researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: Continuous monitoring (using evaluation metrics,
  human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
- **Generation of harmful content**: Mechanisms and guidelines for content
  safety are essential. Developers are encouraged to exercise caution and
  implement appropriate content safety safeguards based on their specific
  product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
  and end-user education can help mitigate malicious applications of
  VLMs. Educational resources and reporting mechanisms for users to flag
  misuse are provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
  of certain personal information and other sensitive data. Developers are
  encouraged to adhere to privacy regulations with privacy-preserving
  techniques.

### Benefits

At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development, compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805