rainjay committed · verified · Commit ce94184 · 1 Parent(s): e059065

Create README.md

---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
tags:
- conversational
---

# Fork from google/gemma-2-9b-it

## 4-bit Quantization

This fork uses the following `bitsandbytes` NF4 configuration:

```python
import torch
from transformers import BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)
```
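
A minimal sketch of loading the model with this config (assumes `bitsandbytes` and `accelerate` are installed; the upstream repo id is shown, substitute this fork's repo id if you are loading it directly):

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-2-9b-it"  # or this fork's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=nf4_config,  # the NF4 config defined above
)
```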

# Gemma 2 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop, or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.

You can also use `float32` by skipping the dtype, but no precision increase will occur (the model weights will just be upcast to `float32`). See examples below.

* _Using `torch.float16`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    device_map="auto",
    torch_dtype=torch.float16,
    revision="float16",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Other optimizations

* _Flash Attention 2_

First make sure to install `flash-attn` in your environment: `pip install flash-attn`

```diff
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```
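
Put together, a minimal runnable sketch of the same change (assuming `flash-attn` is installed and a CUDA GPU is available):

```python
# pip install flash-attn accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # use Flash Attention 2 kernels
).to(0)  # place the model on GPU 0
```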

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.

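For illustration, a minimal sketch of assembling that prompt by hand under the format above (equivalent to the `apply_chat_template` call, up to whitespace details):

```py
# Hand-rolled prompt matching the template output shown above
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```
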
After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
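
To continue the conversation, you can append the model's reply and the next user message to `chat` and re-apply the template; a sketch (the messages are illustrative, and the tokenizer's template is assumed to render assistant turns with the `model` role):

```py
# Hypothetical follow-up turns; replace the content with the actual model reply
chat.append({ "role": "assistant", "content": 'print("Hello, world!")' })
chat.append({ "role": "user", "content": "Now write it in C." })
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```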

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed
  to a broad range of linguistic styles, topics, and vocabulary. Primarily
  English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
  programming languages, which improves its ability to generate code or
  understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
  reasoning and symbolic representation and address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
  applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
  reliable, automated techniques were used to filter out certain personal
  information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations
  involved in training LLMs. They can speed up training considerably compared to
  CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
  for the handling of large models and batch sizes during training. This can
  lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
  handling the growing complexity of large foundation models. You can distribute
  training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
  solution for training large models compared to CPU-based infrastructure,
  especially when considering the time and resources saved due to faster
  training.

These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]: "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

| Benchmark                      | Metric        | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu]                   | 5-shot, top-1 | 71.3        | 75.2         |
| [HellaSwag][hellaswag]         | 10-shot       | 81.9        | 86.4         |
| [PIQA][piqa]                   | 0-shot        | 81.7        | 83.2         |
| [SocialIQA][socialiqa]         | 0-shot        | 53.4        | 53.7         |
| [BoolQ][boolq]                 | 0-shot        | 84.2        | 84.8         |
| [WinoGrande][winogrande]       | partial score | 80.6        | 83.7         |
| [ARC-e][arc]                   | 0-shot        | 88.0        | 88.6         |
| [ARC-c][arc]                   | 25-shot       | 68.4        | 71.4         |
| [TriviaQA][triviaqa]           | 5-shot        | 76.6        | 83.7         |
| [Natural Questions][naturalq]  | 5-shot        | 29.2        | 34.5         |
| [HumanEval][humaneval]         | pass@1        | 40.2        | 51.8         |
| [MBPP][mbpp]                   | 3-shot        | 52.4        | 62.6         |
| [GSM8K][gsm8k]                 | 5-shot, maj@1 | 68.6        | 74.0         |
| [MATH][math]                   | 4-shot        | 36.6        | 42.3         |
| [AGIEval][agieval]             | 3-5-shot      | 52.8        | 55.1         |
| [BIG-Bench][big-bench]         | 3-shot, CoT   | 68.2        | 74.9         |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety
  policies including child sexual abuse and exploitation, harassment, violence
  and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
  datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
  the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
  biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale
harms. On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, WinoBias, RealToxicity, and TruthfulQA
are shown here.

#### Gemma 2.0

| Benchmark                | Metric        | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | -------------- |
| [RealToxicity][realtox]  | average       | 8.25          | 8.84           |
| [CrowS-Pairs][crows]     | top-1         | 37.47         | 36.67          |
| [BBQ Ambig][bbq]         | 1-shot, top-1 | 88.58         | 85.99          |
| [BBQ Disambig][bbq]      | top-1         | 82.67         | 86.94          |
| [Winogender][winogender] | top-1         | 79.17         | 77.22          |
| [TruthfulQA][truthfulqa] |               | 50.27         | 51.60          |
| [Winobias 1_2][winobias] |               | 78.09         | 81.94          |
| [Winobias 2_2][winobias] |               | 95.32         | 97.22          |
| [Toxigen][toxigen]       |               | 39.30         | 38.42          |

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing described and posterior evaluations
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509