---
license: gemma
library_name: transformers
---
# Gemma 2 9B 8-bit
This is an 8-bit quantized version of [Gemma 2 9B](https://huggingface.co/google/gemma-2-9b). __**The model belongs to Google and is licensed under the Gemma Terms of Use**__; it is stored here in quantized form purely for convenience.
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Non-quantized layers are kept in float16
dtype = torch.float16

# device_map="auto" places the weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    "nev/gemma-2-9b-8bit", torch_dtype=dtype, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("nev/gemma-2-9b-8bit")
```
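Once the model and tokenizer are loaded as above, generation works the same as with the original Gemma 2 9B. A minimal sketch (the prompt and `max_new_tokens` value are arbitrary choices for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "nev/gemma-2-9b-8bit", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("nev/gemma-2-9b-8bit")

# Tokenize a prompt and move the tensors to the model's device
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)

# Greedy decoding of up to 20 new tokens
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```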