Instructions to use google/gemma-2-9b-it with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/gemma-2-9b-it with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="google/gemma-2-9b-it")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/gemma-2-9b-it with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/gemma-2-9b-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "google/gemma-2-9b-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/google/gemma-2-9b-it
```
- SGLang
How to use google/gemma-2-9b-it with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "google/gemma-2-9b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "google/gemma-2-9b-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "google/gemma-2-9b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "google/gemma-2-9b-it",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use google/gemma-2-9b-it with Docker Model Runner:
```shell
docker model run hf.co/google/gemma-2-9b-it
```
Update?
I was just browsing HF for exciting new releases, sorted by last update, and here is our capable Gemma again!
Great, I thought, let's take a look at what improvement over an already good model just dropped. Alas, all they added was one line in the readme:
base_model: google/gemma-2-9b
Guys, why? The people coming here are not the Facebook crowd. You undermine your credibility. Don't be evil.
Hi @Neman , sorry for the late response.
This happens because any change to a model's repository on Hugging Face, including minor edits to the README file, registers as an "update," which affects its position when sorting by "last updated."
To avoid this, instead of relying only on the "last updated" sort, a more effective method is to check the "Commits" tab in the "Files and versions" section of a repository.
This will show you exactly what was changed, so you can easily distinguish a major model upload from a minor documentation fix. Thank you.
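That distinction can also be scripted. The sketch below uses made-up commit data (the real history lives in the "Commits" tab, and could be fetched via the `huggingface_hub` library) just to illustrate telling a docs-only bump apart from a genuine model change:

```python
# Sketch: classify commits as docs-only "updates" vs. real model changes.
# The commit data below is hypothetical; on the Hub you would read the
# actual history off the "Commits" tab in "Files and versions".

DOC_FILES = {"README.md", ".gitattributes"}

def is_docs_only(changed_files):
    """True if a commit touched nothing but documentation files."""
    return bool(changed_files) and all(f in DOC_FILES for f in changed_files)

commits = [
    {"title": "Update README.md", "files": ["README.md"]},
    {"title": "Upload weights", "files": ["model-00001-of-00004.safetensors"]},
]

for c in commits:
    kind = "docs-only" if is_docs_only(c["files"]) else "model change"
    print(f'{c["title"]}: {kind}')
```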
Yes, yes; without doing exactly that (checking the "Commits" tab in the "Files and versions" section instead of relying only on the "last updated" sort), I couldn't have written "they added just one line in the readme: base_model: google/gemma-2-9b" in the first place.
I was commenting on the recent trend of pushing models up the "last updated" sort by changing a few characters in the readme.
Without presuming anything personal, to someone it could look like a cheap marketing move; that is why I left this comment in good faith. Stay credible, your company invests so much in its brand.