Instructions to use google/gemma-2-27b-it with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/gemma-2-27b-it with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="google/gemma-2-27b-it")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-27b-it")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
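At 27B parameters, the model will not fit in float32 on a typical single GPU. A common variant (not from the model card; `torch_dtype` and `device_map` are standard Transformers arguments) loads the weights in bfloat16 and lets Accelerate place them across available devices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
# bfloat16 halves memory vs. float32; device_map="auto" shards the model
# across available GPUs/CPU (requires the `accelerate` package).
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```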
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/gemma-2-27b-it with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/gemma-2-27b-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-2-27b-it",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```bash
# Serve the model with vLLM's official OpenAI-compatible Docker image:
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model google/gemma-2-27b-it
```
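Since the server exposes an OpenAI-compatible API, you can also call it with the official `openai` Python client instead of curl. A minimal sketch (the `api_key` value is arbitrary for a local server):

```python
from openai import OpenAI

# Point the client at the local vLLM server
# (use port 30000 for the SGLang server below).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="google/gemma-2-27b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```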
- SGLang
How to use google/gemma-2-27b-it with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "google/gemma-2-27b-it" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-2-27b-it",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "google/gemma-2-27b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-2-27b-it",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
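The same OpenAI-compatible endpoint can also be hit from plain Python; a sketch using `requests` (any HTTP client works):

```python
import requests

# SGLang serves an OpenAI-compatible chat endpoint on port 30000 (as above).
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "google/gemma-2-27b-it",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```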
- Docker Model Runner
How to use google/gemma-2-27b-it with Docker Model Runner:
```bash
docker model run hf.co/google/gemma-2-27b-it
```
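Docker Model Runner also exposes an OpenAI-compatible endpoint. The port, path, and model name below are assumptions based on Docker's documentation (TCP host access defaults to port 12434 when enabled) and may differ for your setup:

```python
import requests

# Assumed default: TCP host access enabled on port 12434,
# e.g. via `docker desktop enable model-runner --tcp 12434`.
# The model name is assumed to match the pulled reference.
resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/google/gemma-2-27b-it",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```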
JAX/Flax Implementation
DeepMind's Gemma implementation does not seem to have been updated for the new release.
Are there any plans to release the JAX/Flax implementation and model?
There are! Our focus was on getting the weights out properly. For my own curiosity, why are you interested in Flax/JAX in particular?
> For my own curiosity, why are you interested in Flax/JAX in particular?
I think using TPUs is the most cost-effective way to do a full fine-tune of the 27B model.
Additionally, the JAX/Flax implementation is useful as a reference implementation. Last time, with Gemma 1, DeepMind's implementation was the only one without bugs.
> There are! Our focus was on getting the weights out properly. For my own curiosity, why are you interested in Flax/JAX in particular?
@canyon289 This would be very convenient. I want to integrate with our JORA library (JAX-centered LLM PEFT fine-tuning). I believe the only differences from Gemma 1/1.1 are (a sketch of the soft-capping change follows this list):
- logit soft-capping,
- sliding window attention, and
- query normalization.
Plus, the weights in Flax format (i.e. orbax.checkpoint)
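For context, logit soft-capping squashes logits smoothly with a scaled tanh. A minimal sketch in JAX, using the cap values reported for Gemma 2 (50.0 for attention logits, 30.0 for final logits); the function and constant names here are illustrative, not from the official repo:

```python
import jax.numpy as jnp

def soft_cap(logits, cap):
    # Squash logits smoothly into the open interval (-cap, cap).
    return cap * jnp.tanh(logits / cap)

# Gemma 2 applies this in two places (values from the Gemma 2 report):
ATTN_LOGIT_SOFTCAP = 50.0   # on attention scores, before the softmax
FINAL_LOGIT_SOFTCAP = 30.0  # on the final vocabulary logits

capped = soft_cap(jnp.array([10.0, 100.0, 1000.0]), ATTN_LOGIT_SOFTCAP)
```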
Thank you both for the answers. There are a couple of other changes, such as GQA! Regardless, it's still being worked on and should be out soonish. My apologies for the delay.
We haven't forgotten about this. We're making some final changes and it's on its way to release.
It's updated! Check it out, folks. Hope you enjoy the models!
@canyon289 Hi, could you point me to the JAX/Flax implementation of the model? I couldn't find Python code for the Gemma 2 implementation; there are only weight files on Kaggle.
The official JAX repo has the configurations for Gemma 2: https://github.com/google-deepmind/gemma/blob/main/gemma/transformer.py
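Loading the checkpoint with that google-deepmind/gemma library looks roughly like the sketch below, which follows the sampling example from the repo's README at the time; the exact API may have changed since, and the checkpoint/tokenizer paths are placeholders:

```python
import sentencepiece as spm
from gemma import params as params_lib
from gemma import sampler as sampler_lib
from gemma import transformer as transformer_lib

# Placeholder paths: point these at the Kaggle Flax checkpoint and tokenizer.
CKPT_PATH = "/path/to/gemma2-27b-it/ckpt"
VOCAB_PATH = "/path/to/tokenizer.model"

params = params_lib.load_and_format_params(CKPT_PATH)
vocab = spm.SentencePieceProcessor()
vocab.Load(VOCAB_PATH)

# from_params should infer the architecture (including the Gemma 2
# soft-caps and sliding-window layers) from the checkpoint shapes.
config = transformer_lib.TransformerConfig.from_params(params, cache_size=1024)
model = transformer_lib.Transformer(config)

sampler = sampler_lib.Sampler(
    transformer=model, vocab=vocab, params=params["transformer"]
)
print(sampler(input_strings=["What is JAX?"], total_generation_steps=128).text)
```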