---
license: gemma
library_name: transformers
pipeline_tag: text-generation
tags:
- conversational
base_model: google/gemma-2-9b
---

# Gemma 2 model card

<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
  gemma-2-9b-it
  <img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>

<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
  <img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop, or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library with:

```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

## Deployment

This model can be deployed efficiently on vLLM, Red Hat Enterprise Linux AI, and OpenShift AI, as shown in the examples below.

Deploy on <strong>vLLM</strong>

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/gemma-2-9b-it"
number_gpus = 4

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

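For example, once such a server is running, the endpoint can be queried with the OpenAI Python client. The snippet below is a sketch only: the base URL, port, API key, and served model name are assumptions and must match your actual deployment.

```python
# Minimal sketch of querying a vLLM OpenAI-compatible endpoint.
# Assumes a server is already running locally on port 8000 and serving this model;
# adjust base_url, api_key, and the model name to match your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/gemma-2-9b-it",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)
```
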
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>

```bash
$ podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
 --ipc=host \
 --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
 --env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
 --name=vllm \
 registry.access.redhat.com/rhaiis/rh-vllm-cuda \
 vllm serve \
 --tensor-parallel-size 8 \
 --max-model-len 32768 \
 --enforce-eager --model RedHatAI/gemma-2-9b-it
```

See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>

<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>

```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/gemma-2-9b-it:1.5
```

```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/gemma-2-9b-it

# Chat with model
ilab model chat --model ~/.cache/instructlab/models/gemma-2-9b-it
```

See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>

<details>
<summary>Deploy on <strong>Red Hat OpenShift AI</strong></summary>

```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
  annotations:
    openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  annotations:
    prometheus.io/port: '8080'
    prometheus.io/path: '/metrics'
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
  containers:
    - name: kserve-container
      image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - "--port=8080"
        - "--model=/mnt/models"
        - "--served-model-name={{.Name}}"
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      ports:
        - containerPort: 8080
          protocol: TCP
```

```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: gemma-2-9b-it # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: gemma-2-9b-it # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2' # this is model specific
          memory: 8Gi # this is model specific
          nvidia.com/gpu: '1' # this is accelerator specific
        requests: # same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime # must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-gemma-2-9b-it:1.5
    tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Exists
```

```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>

# apply both resources to run the model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml

# Apply the InferenceService
oc apply -f inferenceservice.yaml
```

```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.

# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
    "model": "gemma-2-9b-it",
    "stream": true,
    "stream_options": {
        "include_usage": true
    },
    "max_tokens": 1,
    "messages": [
        {
            "role": "user",
            "content": "How can a bee fly when its wings are so small?"
        }
    ]
}'
```

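The same request can also be issued from Python, for example with the `requests` library. This is a sketch only; the placeholder URL and the model name must match your InferenceService and cluster ingress domain.

```python
# Sketch: call the deployed InferenceService from Python.
# The URL below contains placeholders; substitute your actual route
# (see `oc get inferenceservice`) and the model name you configured.
import requests

url = "https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions"
payload = {
    "model": "gemma-2-9b-it",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "How can a bee fly when its wings are so small?"},
    ],
}

response = requests.post(url, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])
```
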
See [Red Hat OpenShift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>

#### Running with the `pipeline` API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-9b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:

```python
messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```

| | <a name="precisions"></a> |
| | #### Running the model on a GPU using different precisions |
| |
|
| | The native weights of this model were exported in `bfloat16` precision. |
| |
|
| | You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below. |
| |
|
| | * _Upcasting to `torch.float32`_ |
| |
|
| | ```python |
| | # pip install accelerate |
| | from transformers import AutoTokenizer, AutoModelForCausalLM |
| | |
| | tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it") |
| | model = AutoModelForCausalLM.from_pretrained( |
| | "google/gemma-2-9b-it", |
| | device_map="auto", |
| | ) |
| | |
| | input_text = "Write me a poem about Machine Learning." |
| | input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") |
| | |
| | outputs = model.generate(**input_ids, max_new_tokens=32) |
| | print(tokenizer.decode(outputs[0])) |
| | ``` |
| |
|
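If you want to confirm which precision a loaded model ended up in, a small sanity check such as the one below can help. It is a sketch that reuses the `model` object from the snippet above; `get_memory_footprint` is a standard Transformers helper.

```python
# Sanity check (sketch): inspect the parameter dtype and approximate memory use.
# Expect torch.float32 here; loading with torch_dtype=torch.bfloat16 roughly halves the footprint.
print(next(model.parameters()).dtype)
print(f"{model.get_memory_footprint() / 1e9:.1f} GB")
```
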
#### Running the model through a CLI

The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
to get started, then launch the CLI with the following command:

```shell
local-gemma --model 9b --preset speed
```

#### Quantized Versions through `bitsandbytes`

<details>
<summary>
  Using 8-bit precision (int8)
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

<details>
<summary>
  Using 4-bit precision
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

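When loading in 4-bit, the compute dtype and quantization type can optionally be controlled through the same `BitsAndBytesConfig`. The snippet below is a sketch of one common configuration, not a recommendation specific to this model.

```python
# Sketch: 4-bit NF4 quantization with bfloat16 compute, via BitsAndBytesConfig.
# These are standard bitsandbytes options; tune them for your hardware.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=quantization_config,
)
```
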
#### Advanced Usage

<details>
<summary>
  Torch compile
</summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the
inference of PyTorch modules. The Gemma 2 model can be run up to 6x faster by leveraging torch compile.

Note that two warm-up steps are required before the full inference speed is realised:

```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set up the k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing the kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).

</details>

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template, as sketched below.

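As a rough illustration, building the same prompt by hand could look like the following. This is a sketch for understanding the format only; the tokenizer's chat template remains the recommended approach and is the authority on exact whitespace and special tokens.

```py
# Sketch: construct the prompt string manually, mirroring the template output shown above.
user_message = "Write a hello world program"
manual_prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
print(manual_prompt == prompt)  # expected to match the chat-template output above
```
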
After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```

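The decoded output above includes the prompt itself. If you only want the model's reply, you can slice off the prompt tokens before decoding, as in this small optional variation:

```py
# Decode only the newly generated tokens, dropping the prompt and special tokens.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```
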
### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed
  to a broad range of linguistic styles, topics, and vocabulary. Primarily
  English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
  programming languages, which improves its ability to generate code or
  understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
  applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
  reliable, automated techniques were used to filter out certain personal
  information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations
  involved in training LLMs. They can speed up training considerably compared to
  CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
  for the handling of large models and batch sizes during training. This can
  lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
  handling the growing complexity of large foundation models. You can distribute
  training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
  solution for training large models compared to CPU-based infrastructure,
  especially when considering the time and resources saved due to faster
  training.

These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]: "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

| Benchmark                     | Metric        | Gemma PT 9B | Gemma PT 27B |
| ----------------------------- | ------------- | ----------- | ------------ |
| [MMLU][mmlu]                  | 5-shot, top-1 | 71.3        | 75.2         |
| [HellaSwag][hellaswag]        | 10-shot       | 81.9        | 86.4         |
| [PIQA][piqa]                  | 0-shot        | 81.7        | 83.2         |
| [SocialIQA][socialiqa]        | 0-shot        | 53.4        | 53.7         |
| [BoolQ][boolq]                | 0-shot        | 84.2        | 84.8         |
| [WinoGrande][winogrande]      | partial score | 80.6        | 83.7         |
| [ARC-e][arc]                  | 0-shot        | 88.0        | 88.6         |
| [ARC-c][arc]                  | 25-shot       | 68.4        | 71.4         |
| [TriviaQA][triviaqa]          | 5-shot        | 76.6        | 83.7         |
| [Natural Questions][naturalq] | 5-shot        | 29.2        | 34.5         |
| [HumanEval][humaneval]        | pass@1        | 40.2        | 51.8         |
| [MBPP][mbpp]                  | 3-shot        | 52.4        | 62.6         |
| [GSM8K][gsm8k]                | 5-shot, maj@1 | 68.6        | 74.0         |
| [MATH][math]                  | 4-shot        | 36.6        | 42.3         |
| [AGIEval][agieval]            | 3-5-shot      | 52.8        | 55.1         |
| [BIG-Bench][big-bench]        | 3-shot, CoT   | 68.2        | 74.9         |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety
  policies including child sexual abuse and exploitation, harassment, violence
  and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
  datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
  the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
  biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, WinoBias, RealToxicity, and TruthfulQA
are shown here.

#### Gemma 2.0

| Benchmark                | Metric        | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | -------------- |
| [RealToxicity][realtox]  | average       | 8.25          | 8.84           |
| [CrowS-Pairs][crows]     | top-1         | 37.47         | 36.67          |
| [BBQ Ambig][bbq]         | 1-shot, top-1 | 88.58         | 85.99          |
| [BBQ Disambig][bbq]      | top-1         | 82.67         | 86.94          |
| [Winogender][winogender] | top-1         | 79.17         | 77.22          |
| [TruthfulQA][truthfulqa] |               | 50.27         | 51.60          |
| [Winobias 1_2][winobias] |               | 78.09         | 81.94          |
| [Winobias 2_2][winobias] |               | 95.32         | 97.22          |
| [Toxigen][toxigen]       |               | 39.30         | 38.42          |

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing described and posterior evaluations
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered to remove PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509