Instructions for using google/gemma-2-27b-it with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google/gemma-2-27b-it with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="google/gemma-2-27b-it")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-27b-it")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Inference
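The Inference entry points to Hugging Face's hosted Inference Providers. As a minimal sketch (assuming huggingface_hub is installed and a valid HF token is available in your environment), the same chat request can be made through the InferenceClient:

# Minimal sketch: chat completion via Hugging Face Inference Providers.
# Assumes `pip install huggingface_hub` and an HF_TOKEN in the environment.
from huggingface_hub import InferenceClient

client = InferenceClient(model="google/gemma-2-27b-it")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)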
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/gemma-2-27b-it with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/gemma-2-27b-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-2-27b-it",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
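Because the vLLM server speaks the OpenAI-compatible API, the same request can also be made from Python. A minimal sketch, assuming `pip install openai` and the server running locally as above:

# Minimal sketch: call the local vLLM server with the openai SDK.
# vLLM does not check the API key by default, so any placeholder value works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="google/gemma-2-27b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)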
- SGLang
How to use google/gemma-2-27b-it with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "google/gemma-2-27b-it" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-2-27b-it",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
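The same OpenAI-compatible endpoint can be called from Python. A minimal sketch using the requests library (any HTTP client works), assuming the server is running on port 30000 as above:

# Minimal sketch: POST to the local SGLang server's OpenAI-compatible endpoint.
import requests

response = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "google/gemma-2-27b-it",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])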
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "google/gemma-2-27b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-2-27b-it",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Docker Model Runner
How to use google/gemma-2-27b-it with Docker Model Runner:
docker model run hf.co/google/gemma-2-27b-it
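Once the model is running, Docker Model Runner exposes an OpenAI-compatible endpoint. The sketch below is hedged: the port (12434) and the /engines/v1 path are assumptions based on Docker Model Runner's host TCP configuration; verify them against the `docker model` documentation for your setup.

# Hedged sketch: base_url (port 12434, /engines/v1) is an assumption based on
# Docker Model Runner's host TCP access; no real API key is required.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="hf.co/google/gemma-2-27b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)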
Build Real-Time Conversational Agents with 56+ Models via NexaAPI (Gemini Flash Alternative)
#44
by nickyni - opened
Inspired by Google's Gemini 2.5 Flash Live API launch — here's how to build the same real-time conversational agent with access to GPT-4o, Claude, Llama, and 56+ models via one unified API.
Quick Start
from openai import OpenAI

# One line change from OpenAI SDK — same code, 56+ models
client = OpenAI(
    api_key="YOUR_NEXA_API_KEY",
    base_url="https://api.nexaapi.com/v1"
)

# Streaming conversational agent
stream = client.chat.completions.create(
    model="gpt-4o",  # swap: claude-3-5-sonnet, llama-3.3-70b, mistral-large...
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! Tell me about real-time AI."}
    ],
    stream=True,
    max_tokens=300
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
Model Switcher — One Line Change
# Try different models with identical code:
for model in ["gpt-4o", "claude-3-5-sonnet-20241022", "llama-3.3-70b-instruct"]:
    response = client.chat.completions.create(
        model=model,  # ← only this line changes
        messages=[{"role": "user", "content": "What makes a great conversational AI?"}],
        max_tokens=100
    )
    print(f"{model}: {response.choices[0].message.content[:150]}")
Pricing Comparison
| Provider | GPT-4o | Claude 3.5 Sonnet | Llama 3.3 70B |
|---|---|---|---|
| Official | $2.50 / 1M input tokens | $3.00 / 1M input tokens | N/A |
| NexaAPI | Up to 5× cheaper | Up to 5× cheaper | Cheapest |
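The Colab linked below mentions a cost estimator; here is a minimal sketch of the idea, working from the non-streaming response object in the Model Switcher example above. The rates are placeholders to substitute with your provider's actual per-million-token prices (the table lists input-token rates only):

# Minimal sketch: estimate the USD cost of one completion from reported usage.
# Rates are placeholder assumptions; substitute your provider's actual prices.
def estimate_cost(response, usd_per_m_input, usd_per_m_output):
    usage = response.usage
    return (usage.prompt_tokens * usd_per_m_input
            + usage.completion_tokens * usd_per_m_output) / 1_000_000

# Example with the table's official GPT-4o input rate; the output rate is a placeholder.
print(f"${estimate_cost(response, usd_per_m_input=2.50, usd_per_m_output=10.00):.6f}")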
Resources
- 🌐 NexaAPI: https://nexa-api.com (free tier, no credit card)
- 🐙 GitHub: https://github.com/diwushennian4955/realtime-conversational-agent-nexaapi
- 📖 Blog: https://nexa-api.com/blog/gemini-live-conversational-agent-nexaapi
- 📓 Colab: Full notebook with model comparison and cost estimator
Hi @nickyni -
Thank you for sharing this with the community. This is a well-structured and practical showcase, and the unified API approach is particularly compelling.