---
license: gemma
tags:
- gemma3
- gemma
- google
- functiongemma
pipeline_tag: text-generation
library_name: transformers
base_model:
- google/functiongemma-270m-it
---

# Read our [How to Run & Fine-tune Guide!](https://docs.unsloth.ai/models/functiongemma)

<div>
<p style="margin-top: 0;margin-bottom: 0;">
    <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/models/functiongemma">
    <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>

# FunctionGemma model card

**Model Page**: [FunctionGemma](https://ai.google.dev/gemma/docs/functiongemma)

**Resources and Technical Documentation**:

- [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
- [FunctionGemma on Kaggle](https://www.kaggle.com/models/google/functiongemma/)
- [FunctionGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/functiongemma)

**Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)\
**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

> [!Note]
> FunctionGemma is intended to be fine-tuned for your specific function-calling task, including multi-turn use cases.

FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function-calling models. FunctionGemma is not intended for use as a direct dialogue model; as is typical of models this size, it is designed to be highly performant after further fine-tuning. Built on the Gemma 3 270M model and with the same research and technology used to create the Gemini models, FunctionGemma has been trained specifically for function calling. The model has the same architecture as Gemma 3 but uses a different chat format, and it is well suited for text-only function calling.

Its uniquely small size makes it possible to deploy in environments with limited resources such as laptops, desktops, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone. Like the base Gemma 270M, the model has been optimized to be extremely versatile and performant on a variety of hardware in single-turn scenarios, but it should be fine-tuned on single-turn or multi-turn task-specific data to achieve the best accuracy in specific domains.

To demonstrate how specializing the 270M parameter model can achieve high performance on specific agentic workflows, we have highlighted two use cases in the [Google AI Edge Gallery app](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&pcampaignid=web_share).

- **Tiny Garden:** A model fine-tuned to power a voice-controlled interactive game. It handles game logic to manage a virtual plot of land, decomposing commands like "Plant sunflowers in the top row" and "Water the flowers in plots 1 and 2" into app-specific functions (e.g., plant_seed, water_plots) and coordinate targets. This demonstrates the model's capacity to drive custom app mechanics without server connectivity.

- **Mobile Actions:** To empower developers to build their own expert agents, we have published [a dataset](https://huggingface.co/datasets/google/mobile-actions) and a [fine-tuning recipe](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb) to demonstrate fine-tuning FunctionGemma. It translates user inputs (e.g., "Create a calendar event for lunch," "Turn on the flashlight") into function calls that trigger Android OS system tools. This interactive notebook demonstrates how to take the base FunctionGemma model and build a "Mobile Actions" fine-tune from scratch for use in the [Google AI Edge Gallery app](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&pcampaignid=web_share); a minimal training sketch follows this list. This use case demonstrates the model's ability to act as an offline, private agent for personal device tasks.

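For orientation, the following is a minimal sketch of what such a fine-tune can look like with TRL's `SFTTrainer`. It assumes the published dataset exposes chat-formatted training examples; the split name and hyperparameters here are illustrative stand-ins, and the published recipe notebook remains the authoritative reference.

```python
# Hypothetical fine-tuning sketch; consult the Mobile Actions recipe notebook
# for the actual dataset schema, formatting, and hyperparameters.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset cited in this card; the "train" split name is an assumption.
dataset = load_dataset("google/mobile-actions", split="train")

trainer = SFTTrainer(
    model="google/functiongemma-270m-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="functiongemma-mobile-actions"),  # illustrative
)
trainer.train()
```
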
### Inputs and outputs

- **Input:**
    - Text string, such as a question, a prompt, or a document to be summarized
    - Total input context of 32K tokens
- **Output:**
    - Generated text in response to the input, such as an answer to a question, or a summary of a document
    - Total output context up to 32K tokens per request, subtracting the request input tokens (a small budgeting sketch follows this list)

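Since input and output share the 32K window, a simple way to budget generation length is to subtract the prompt length from the window. A minimal sketch, taking 32K as 32,768 tokens (an assumption about the exact limit; check the model config):

```python
# Budget generation length against the shared context window.
CONTEXT_WINDOW = 32_768  # assumed value for "32K tokens"

def generation_budget(num_input_tokens: int) -> int:
    """Tokens left for the response after the prompt is counted."""
    return max(CONTEXT_WINDOW - num_input_tokens, 0)

print(generation_budget(512))  # 32256 tokens available for the response
```
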
### Basic Usage

The following is a code example of how to use FunctionGemma to generate a function call from a JSON definition using the Hugging Face Transformers library.

First install the dependencies:

```sh
$ pip install torch
$ pip install transformers
```

Then load the model and the processor using Transformers:

```python
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("google/functiongemma-270m-it", device_map="auto")
model = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it", dtype="auto", device_map="auto")
```

Define the function using a JSON schema, then set a system instruction using the developer role. This is required to let the model know it should use the function(s) provided. Add a user query as input to the model and then generate the output. The model will then generate one or more function calls that it wants the developer to make on its behalf.

```python
weather_function_schema = {
    "type": "function",
    "function": {
        "name": "get_current_temperature",
        "description": "Gets the current temperature for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city name, e.g. San Francisco",
                },
            },
            "required": ["location"],
        },
    }
}

message = [
    # ESSENTIAL SYSTEM PROMPT:
    # This line activates the model's function calling logic.
    {
        "role": "developer",
        "content": "You are a model that can do function calling with the following functions"
    },
    {
        "role": "user",
        "content": "What's the temperature in London?"
    }
]

inputs = processor.apply_chat_template(message, tools=[weather_function_schema], add_generation_prompt=True, return_dict=True, return_tensors="pt")

out = model.generate(**inputs.to(model.device), pad_token_id=processor.eos_token_id, max_new_tokens=128)
output = processor.decode(out[0][len(inputs["input_ids"][0]):], skip_special_tokens=True)

print(output)
# <start_function_call>call:get_current_temperature{location:<escape>London<escape>}<end_function_call>
```

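The printed call is meant to be executed by your application, not shown to the user. Continuing from the example above, here is a minimal, illustrative parser and dispatch loop for the tagged format shown in the output; the regexes and the stand-in `get_current_temperature` implementation are assumptions for demonstration, not part of Transformers or the model.

```python
import re

# Illustrative parser for the tagged call format printed above.
CALL_RE = re.compile(
    r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>", re.DOTALL
)
ARG_RE = re.compile(r"(\w+):<escape>(.*?)<escape>")

# Stand-in implementation; a real app would call an actual weather API.
def get_current_temperature(location: str) -> str:
    return f"15°C in {location}"

TOOLS = {"get_current_temperature": get_current_temperature}

def run_function_calls(text: str) -> list[str]:
    results = []
    for name, arg_blob in CALL_RE.findall(text):
        kwargs = dict(ARG_RE.findall(arg_blob))  # e.g. {"location": "London"}
        results.append(TOOLS[name](**kwargs))
    return results

print(run_function_calls(output))  # ["15°C in London"]
```

In a real application, the function result would typically be appended back to the conversation as a tool response so the model can summarize it in natural language.
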
For more detailed examples, see the [Gemma documentation](https://ai.google.dev/gemma/docs/functiongemma).

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The model was trained with 6T tokens. The knowledge cutoff date for the training data was August 2024. These are the key components:

- Public Tool Definitions: common APIs found on the web.
- Tool Use Interactions: a mix of prompts, function calls, function responses, and natural language responses from the model that summarize the function call response, or request clarifications when the prompt is ambiguous or incomplete.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with [our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.

These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).

### Software

Training was done using [JAX](https://github.com/jax-ml/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks, which makes it especially suitable for foundation models, including large language models like these ones.\
Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://goo.gle/gemma2report): *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."*

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

<table>
  <thead>
    <tr>
      <th><strong>Benchmark</strong></th>
      <th><strong>n-shot</strong></th>
      <th><strong>FunctionGemma 270M</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr><td>BFCL Simple</td><td>0-shot</td><td>61.6</td></tr>
    <tr><td>BFCL Parallel</td><td>0-shot</td><td>63.5</td></tr>
    <tr><td>BFCL Multiple</td><td>0-shot</td><td>39</td></tr>
    <tr><td>BFCL Parallel Multiple</td><td>0-shot</td><td>29.5</td></tr>
    <tr><td>BFCL Live Simple</td><td>0-shot</td><td>36.2</td></tr>
    <tr><td>BFCL Live Parallel</td><td>0-shot</td><td>25.7</td></tr>
    <tr><td>BFCL Live Multiple</td><td>0-shot</td><td>22.9</td></tr>
    <tr><td>BFCL Live Parallel Multiple</td><td>0-shot</td><td>20.8</td></tr>
    <tr><td>BFCL Relevance</td><td>0-shot</td><td>61.1</td></tr>
    <tr><td>BFCL Irrelevance</td><td>0-shot</td><td>70.6</td></tr>
  </tbody>
</table>

**Impact on Performance after Fine-tuning on the Mobile Actions Dataset**\
To demonstrate the value of specialization for small language models, we compared the base FunctionGemma model against the fine-tuned model using the "Mobile Actions" [recipe](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb). Fine-tuning significantly improved the base FunctionGemma model's ability to correctly identify and format mobile system calls.

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Eval results for Mobile Actions</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Base FunctionGemma model</td><td>58%</td></tr>
    <tr><td>Mobile Actions Fine-Tune</td><td>85%</td></tr>
  </tbody>
</table>

**On-Device Performance of the Gemma 270M Fine-tuned Use Cases**\
We evaluated the fine-tuned use cases on a Samsung S25 Ultra to assess on-device latency and memory footprint.

- **Context:** 512 prefill tokens and 32 decode tokens.
- **Hardware:** S25 Ultra CPU using the LiteRT XNNPACK delegate with 4 threads.

Mobile Actions on-device performance:

<table>
  <thead>
    <tr>
      <th>Backend</th>
      <th>Quantization scheme</th>
      <th>Context length</th>
      <th>Prefill (tokens per second)</th>
      <th>Decode (tokens per second)</th>
      <th>Time-to-first-token (seconds)</th>
      <th>Model Size (MB)</th>
      <th>Peak RSS Memory (MB)</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>CPU</td><td>dynamic_int8</td><td>1024</td><td>1718</td><td>125.9</td><td>0.3</td><td>288</td><td>551</td></tr>
  </tbody>
</table>

Tiny Garden on-device performance:

<table>
  <thead>
    <tr>
      <th>Backend</th>
      <th>Quantization scheme</th>
      <th>Context length</th>
      <th>Prefill (tokens per second)</th>
      <th>Decode (tokens per second)</th>
      <th>Time-to-first-token (seconds)</th>
      <th>Model Size (MB)</th>
      <th>Peak RSS Memory (MB)</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>CPU</td><td>dynamic_int8</td><td>1024</td><td>1743</td><td>125.7</td><td>0.3</td><td>288</td><td>549</td></tr>
  </tbody>
</table>

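As a rough sanity check, the reported time-to-first-token lines up with the prefill throughput at the measured 512-token prefill. A back-of-the-envelope estimate that ignores decode startup and other overheads:

```python
# TTFT ~= prefill tokens / prefill throughput (back-of-the-envelope).
prefill_tokens = 512
prefill_tps = 1718  # Mobile Actions row: CPU, dynamic_int8
print(f"{prefill_tokens / prefill_tps:.2f} s")  # ~0.30 s, matching the table
```
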
## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts covering safety policies including harassment, violence and gore, and hate speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model's capabilities and behaviors. The model produced minimal policy violations and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English-language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

This model is not intended for use as a direct dialogue model.\
Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use cases that the model creators considered as part of model training and development.

- Content Creation and Communication
    - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
    - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
    - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
- Research and Education
    - Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
    - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
    - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

- Training Data
    - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
    - The scope of the training dataset determines the subject areas the model can handle effectively.
- Context and Task Complexity
    - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
    - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
- Language Ambiguity and Nuance
    - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
    - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- Common Sense
    - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
    - LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; the input data pre-processing and posterior evaluations are described and reported in this card.
- Misinformation and Misuse
    - LLMs can be misused to generate text that is false, misleading, or harmful.
    - Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
- Transparency and Accountability
    - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
    - A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

- Perpetuation of biases: Continuous monitoring (using evaluation metrics and human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
- Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
- Privacy violations: Models were trained on data filtered for the removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development, as compared to similarly sized models.