Instructions for using InterSync/Gemma-7B-Instruct-Function-Calling with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use InterSync/Gemma-7B-Instruct-Function-Calling with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="InterSync/Gemma-7B-Instruct-Function-Calling")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("InterSync/Gemma-7B-Instruct-Function-Calling")
model = AutoModelForCausalLM.from_pretrained("InterSync/Gemma-7B-Instruct-Function-Calling")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
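For a 7B model, loading the weights in half precision with automatic device placement keeps memory use manageable. A minimal sketch, assuming a CUDA-capable GPU and the `accelerate` package installed (both are assumptions, not requirements stated by this card):

```python
# Load in bfloat16 with automatic device placement (requires `accelerate`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("InterSync/Gemma-7B-Instruct-Function-Calling")
model = AutoModelForCausalLM.from_pretrained(
    "InterSync/Gemma-7B-Instruct-Function-Calling",
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32
    device_map="auto",           # spreads layers across available devices
)
```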
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use InterSync/Gemma-7B-Instruct-Function-Calling with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "InterSync/Gemma-7B-Instruct-Function-Calling"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "InterSync/Gemma-7B-Instruct-Function-Calling",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
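Because the vLLM server speaks the OpenAI-compatible API, you can also call it from Python. A minimal sketch using the `openai` client, assuming the server from the previous step is running locally on the default port 8000:

```python
# Query the vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai` and a server started as shown above.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default address
    api_key="EMPTY",  # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="InterSync/Gemma-7B-Instruct-Function-Calling",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```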
- SGLang
How to use InterSync/Gemma-7B-Instruct-Function-Calling with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "InterSync/Gemma-7B-Instruct-Function-Calling" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "InterSync/Gemma-7B-Instruct-Function-Calling",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
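The same endpoint can be called from Python without any extra client library. A minimal sketch using `requests`, assuming the server from the previous step is listening on port 30000:

```python
# Query the SGLang server's OpenAI-compatible chat endpoint.
# Assumes `pip install requests` and a running server on port 30000.
import requests

payload = {
    "model": "InterSync/Gemma-7B-Instruct-Function-Calling",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```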
Use Docker images

```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "InterSync/Gemma-7B-Instruct-Function-Calling" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "InterSync/Gemma-7B-Instruct-Function-Calling",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Docker Model Runner
How to use InterSync/Gemma-7B-Instruct-Function-Calling with Docker Model Runner:
```sh
docker model run hf.co/InterSync/Gemma-7B-Instruct-Function-Calling
```
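Docker Model Runner can also expose an OpenAI-compatible endpoint. A hedged sketch, assuming TCP access has been enabled on port 12434; the host, port, and URL path below are assumptions, so check your Docker Model Runner configuration before relying on them:

```python
# Hypothetical call to Docker Model Runner's OpenAI-compatible API.
# The base URL is an assumption; verify the host/port/path in your
# Docker Model Runner setup before use.
import requests

payload = {
    "model": "hf.co/InterSync/Gemma-7B-Instruct-Function-Calling",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",  # assumed endpoint
    json=payload,
)
print(resp.json()["choices"][0]["message"]["content"])
```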
Model Page: Gemma
Fine-tuned Gemma with OpenAI Function Call Support
A fine-tuned version of Gemma 7B Instruct that supports direct function calling. This new capability aligns with the functionality seen in OpenAI's models, enabling Gemma to interact with external data sources and perform more complex tasks, such as fetching real-time information or integrating with custom databases for enriched AI-powered applications.
Features
- Direct Function Calls: Gemma now supports structured function calls, allowing for the integration of external APIs and databases directly into the conversational flow. This makes it possible to execute custom searches, retrieve data from the web or specific databases, and even summarize or explain content in depth.
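The exact prompt format this fine-tune expects is not documented on the card, so the following is only a rough sketch: it passes an OpenAI-style function schema inside the user message and tries to parse a JSON function call from the reply. The schema layout, prompt wording, and output shape are all assumptions, not the model's confirmed interface.

```python
# Rough sketch of a function-calling round trip; the prompt format and
# the expected output shape are assumptions, not a documented interface.
import json
from transformers import pipeline

pipe = pipeline("text-generation", model="InterSync/Gemma-7B-Instruct-Function-Calling")

# OpenAI-style function schema (assumed to match the fine-tuning format).
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [
    {"role": "user",
     "content": "You can call these functions: "
                + json.dumps(tools)
                + "\nWhat's the weather in Paris?"},
]
out = pipe(messages, max_new_tokens=128)
reply = out[0]["generated_text"][-1]["content"]

# If the model emitted a JSON function call, parse and dispatch it.
try:
    call = json.loads(reply)
except json.JSONDecodeError:
    call = None
if isinstance(call, dict) and "name" in call:
    print("Function call:", call["name"], call.get("arguments"))
else:
    print("Plain answer:", reply)
```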
Fine-tuned Quantized Models
Updating:
Model Description
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.