Instructions for using gnumanth/xkcd-functiongemma with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use gnumanth/xkcd-functiongemma with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="gnumanth/xkcd-functiongemma")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("gnumanth/xkcd-functiongemma", dtype="auto")
- PEFT
How to use gnumanth/xkcd-functiongemma with PEFT:
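A minimal sketch, assuming the repository hosts LoRA adapter weights for the google/functiongemma-270m-it base model (if the weights are already merged, the Transformers snippet above is sufficient):
# Assumption: this repo contains LoRA adapter weights rather than merged weights
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "gnumanth/xkcd-functiongemma")
tokenizer = AutoTokenizer.from_pretrained("gnumanth/xkcd-functiongemma")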
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use gnumanth/xkcd-functiongemma with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "gnumanth/xkcd-functiongemma"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gnumanth/xkcd-functiongemma",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/gnumanth/xkcd-functiongemma
- SGLang
How to use gnumanth/xkcd-functiongemma with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "gnumanth/xkcd-functiongemma" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gnumanth/xkcd-functiongemma",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "gnumanth/xkcd-functiongemma" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gnumanth/xkcd-functiongemma",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Unsloth Studio
How to use gnumanth/xkcd-functiongemma with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for gnumanth/xkcd-functiongemma to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for gnumanth/xkcd-functiongemma to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for gnumanth/xkcd-functiongemma to start chatting
Load model with FastModel
pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="gnumanth/xkcd-functiongemma",
    max_seq_length=2048,
)
- Docker Model Runner
How to use gnumanth/xkcd-functiongemma with Docker Model Runner:
docker model run hf.co/gnumanth/xkcd-functiongemma
XKCD FunctionGemma
A fine-tuned version of google/functiongemma-270m-it for XKCD comic search function calling.
Model Description
This model was fine-tuned to generate structured function calls for searching XKCD comics. Given a natural language query about comics, it outputs a properly formatted tool call that can be parsed and executed.
- Base model: google/functiongemma-270m-it
- Fine-tuning method: LoRA via Unsloth (1.4% trainable parameters)
- Training data: 2,630 examples from olivierdehaene/xkcd
- Training time: ~8 minutes on a T4 GPU
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import json
import re
# Load model
model = AutoModelForCausalLM.from_pretrained(
    "gnumanth/xkcd-functiongemma",
    device_map="auto",
    torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("gnumanth/xkcd-functiongemma")
# Define tools
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_xkcd",
        "description": "Search XKCD comics by topic",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"]
        }
    }
}]
# Generate function call
messages = [{"role": "user", "content": "Find xkcd about programming"}]
text = tokenizer.apply_chat_template(messages, tools=TOOLS, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
# Output: <start_function_call>call:search_xkcd{"query": "programming"}<end_function_call>
Parsing Function Calls
def parse_function_call(output: str) -> dict | None:
    """Extract function name and arguments from model output."""
    match = re.search(r'call:(\w+)\s*\{(.+)\}', output, re.DOTALL)
    if not match:
        return None

    func_name = match.group(1)
    args_raw = match.group(2).strip()

    # Handle double braces from training format
    args_raw = re.sub(r'^\s*\{', '', args_raw)
    if args_raw.endswith('}'):
        args_raw = args_raw[:-1]

    try:
        return {"function": func_name, "arguments": json.loads('{' + args_raw + '}')}
    except json.JSONDecodeError:
        return None
# Usage
call = parse_function_call(response)
# {'function': 'search_xkcd', 'arguments': {'query': 'programming'}}
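A parsed call can then be routed to a local handler. The search_xkcd function below is a hypothetical placeholder for your own search backend; the model only emits the call and does not ship one:
def search_xkcd(query: str) -> list:
    """Hypothetical handler: look up comics in your own index or XKCD client."""
    raise NotImplementedError

HANDLERS = {"search_xkcd": search_xkcd}

if call is not None and call["function"] in HANDLERS:
    results = HANDLERS[call["function"]](**call["arguments"])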
Training Details
- Epochs: 1
- Batch size: 2 (with 4 gradient accumulation steps)
- Learning rate: 2e-4
- LoRA rank: 16
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Final loss: 0.281
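As a rough reconstruction of the setup implied by the numbers above (the actual run used Unsloth, so the exact arguments may differ), the LoRA and optimizer settings map onto standard PEFT/Transformers configuration like this:
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="outputs",            # hypothetical output path
    per_device_train_batch_size=2,   # batch size 2
    gradient_accumulation_steps=4,   # effective batch size 8
    learning_rate=2e-4,
    num_train_epochs=1,
)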
Limitations
- Only trained for XKCD search queries
- May produce double braces in output (handled by parser above)
- Small model (270M params) - limited reasoning capability
License
Apache 2.0 (same as base model)
Links
Model tree for gnumanth/xkcd-functiongemma
Base model
google/functiongemma-270m-it