---
base_model: google/functiongemma-270m-it
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/functiongemma-270m-it
- lora
- sft
- transformers
- trl
- function-calling
- sports
- event-parser
- gemma
language:
- en
license: apache-2.0
---
# FunctionGemma 270M – Sports Event Parser

A lightweight LoRA fine-tune of `google/functiongemma-270m-it` that converts natural-language sports event requests into structured `create_sports_event` function calls with proper ISO 8601 timestamps and timezone handling.

> "Soccer this Friday 4pm @ Central Park" →
> `{"sport": "Soccer", "venue_name": "Central Park", "start_time": "2026-02-13T16:00:00-05:00", ...}`
## Model Details

### Model Description

This adapter teaches a 270M-parameter Gemma model to act as a function-calling layer for sports event creation. Given a user's natural language request plus their timezone, the model outputs a JSON function call with the correct sport, venue, date (resolved from relative references like "tomorrow", "this Friday", "next Monday"), time in ISO 8601 with timezone offset, participant count, and event type.
- Developed by: sarvkk
- Model type: Causal Language Model (LoRA adapter)
- Language(s): English
- License: Apache 2.0
- Base model: google/functiongemma-270m-it (270M parameters)
- Adapter size: ~0.5% of base model parameters
### Key Features

- Relative date resolution – understands "tomorrow", "this Friday", "next Monday", "Saturday", etc.
- Multi-timezone support – outputs correct UTC offsets for America/New_York, America/Los_Angeles, America/Chicago, Europe/London, Asia/Tokyo, Australia/Sydney, and more
- ISO 8601 timestamps – e.g. `2026-02-07T16:00:00-05:00`
- Structured output – consistently produces valid JSON function calls
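The date rule the adapter learns can also be mirrored deterministically on the application side, e.g. to sanity-check model output. A minimal sketch, assuming the prompt's stated convention that a bare weekday means the next upcoming occurrence (`resolve_relative_date` is a hypothetical helper, not shipped with the adapter):

```python
from datetime import date, timedelta

# Weekday names in Python's Monday=0 ordering.
WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]

def resolve_relative_date(reference: date, phrase: str) -> date:
    """Resolve 'tomorrow', 'this friday', 'next monday' against a reference date.
    A bare weekday (or 'this <day>') means the next upcoming occurrence;
    'next <day>' skips a full week when the reference is already that day."""
    phrase = phrase.lower().strip()
    if phrase == "today":
        return reference
    if phrase == "tomorrow":
        return reference + timedelta(days=1)
    qualifier, _, day = phrase.rpartition(" ")  # "friday" -> ("", "", "friday")
    target = WEEKDAYS.index(day)
    delta = (target - reference.weekday()) % 7
    if delta == 0 and qualifier == "next":
        delta = 7
    return reference + timedelta(days=delta)

# 2026-02-09 is a Monday, so "this friday" resolves to 2026-02-13,
# matching the example output in this card.
print(resolve_relative_date(date(2026, 2, 9), "this friday"))
```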
## How to Get Started

**Important:** you must use `bfloat16` – Gemma's RMSNorm produces NaN in fp16.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
import json
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

BASE_MODEL = "google/functiongemma-270m-it"
ADAPTER_REPO = "sarvkk/funcgemma-event-parser-v2"

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load base + LoRA adapter (bfloat16 -- fp16 produces NaN in Gemma's RMSNorm)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    device_map={"": device},
    dtype=torch.bfloat16,
    attn_implementation="eager",
    low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(base_model, ADAPTER_REPO, device_map={"": device})
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_REPO)
model.eval()

# Function schema
FUNCTION_SCHEMA = {
    "name": "create_sports_event",
    "description": "Create a new sports event from natural language description",
    "parameters": {
        "type": "object",
        "properties": {
            "sport": {"type": "string", "description": "Type of sport"},
            "venue_name": {"type": "string", "description": "Name of the venue"},
            "start_time": {"type": "string", "description": "ISO 8601 with timezone"},
            "max_participants": {"type": "integer", "default": 2},
            "event_type": {
                "type": "string",
                "enum": ["Casual", "Light Training", "Looking to Improve", "Competitive Game"],
            },
        },
        "required": ["sport", "venue_name", "start_time"],
    },
}

# Build the prompt with date context so the model can resolve relative dates
now = datetime.now()
today_str = now.strftime("%Y-%m-%d")
today_day = now.strftime("%A")
current_time = now.strftime("%H:%M")
tomorrow_str = (now + timedelta(days=1)).strftime("%Y-%m-%d")

user_timezone = "America/New_York"
tz = ZoneInfo(user_timezone)
offset = now.replace(hour=12, tzinfo=tz).strftime("%z")  # e.g. "-0500"
tz_offset = f"{offset[:3]}:{offset[3:]}"                 # e.g. "-05:00"

user_query = "Soccer this Friday 4pm @ Central Park"

prompt = f"""<start_of_turn>user
Current date and time: {today_str} ({today_day}) at {current_time}
User timezone: {user_timezone} (UTC{tz_offset})
User request: {user_query}
Available functions:
{json.dumps([FUNCTION_SCHEMA], indent=2)}
Important:
- Calculate dates relative to {today_str}
- "tomorrow" = {tomorrow_str}
- "Friday" = the next upcoming Friday from {today_str}
- All times should be in ISO 8601 format with timezone offset
- Example: "{tomorrow_str}T16:00:00{tz_offset}"
<end_of_turn>
<start_of_turn>model
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
result = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract the JSON between <function_call> tags (the closing tag may be
# absent if generation stopped at max_new_tokens)
start = result.find("<function_call>")
if start == -1:
    raise ValueError("No <function_call> tag in model output")
start += len("<function_call>")
end = result.find("</function_call>", start)
if end == -1:
    end = len(result)
parsed = json.loads(result[start:end].strip())
print(json.dumps(parsed, indent=2))
```
### Example Output

```json
{
  "name": "create_sports_event",
  "arguments": {
    "sport": "Soccer",
    "venue_name": "Central Park",
    "start_time": "2026-02-13T16:00:00-05:00",
    "max_participants": 22,
    "event_type": "Casual"
  }
}
```
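Before handing a parsed call to an application, the arguments are worth validating against the schema's constraints. A minimal sketch (`validate_call` is a hypothetical helper, not part of this card's pipeline):

```python
from datetime import datetime

REQUIRED = ["sport", "venue_name", "start_time"]
EVENT_TYPES = {"Casual", "Light Training", "Looking to Improve", "Competitive Game"}

def validate_call(parsed: dict) -> list:
    """Return a list of problems with a parsed create_sports_event call;
    an empty list means the call looks usable."""
    errors = []
    if parsed.get("name") != "create_sports_event":
        errors.append("unexpected function name")
    args = parsed.get("arguments", {})
    for field in REQUIRED:
        if field not in args:
            errors.append(f"missing required field: {field}")
    # fromisoformat accepts offsets like -05:00, so this also checks
    # the timezone suffix the model is trained to emit.
    try:
        datetime.fromisoformat(args["start_time"])
    except (KeyError, ValueError):
        errors.append("start_time is not valid ISO 8601")
    if "event_type" in args and args["event_type"] not in EVENT_TYPES:
        errors.append("event_type not in enum")
    return errors
```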
## Training Details

### Training Data

Synthetically generated dataset of ~600 examples covering:
- 25 query templates per reference date – varied sports (Soccer, Basketball, Tennis, Volleyball, Running, Swimming, Cycling, Golf, Hockey, Badminton, Yoga), venues, times, and phrasing styles
- 6 reference dates across February 2026 (covering different days of the week)
- 4 user timezones: America/New_York, America/Los_Angeles, America/Chicago, Europe/London
- Additional timezone examples (Asia/Tokyo, Australia/Sydney) embedded in the training set
- Event types: Casual, Light Training, Looking to Improve, Competitive Game
Each example includes the current date context in the prompt so the model learns to resolve relative dates ("tomorrow", "this Friday", "next Monday") dynamically.
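The generation recipe can be sketched as a template loop over reference dates, timezones, and sports. The exact templates are not published, so everything below is illustrative (including the fixed `-05:00` offset, which the real generator would derive from the timezone):

```python
import json
from datetime import date, timedelta
from itertools import product

SPORTS = ["Soccer", "Basketball", "Tennis"]              # subset of the 11 sports
TIMEZONES = ["America/New_York", "Europe/London"]        # subset of the 4 timezones
REFERENCE_DATES = [date(2026, 2, 6), date(2026, 2, 9)]   # 2 of the 6 February 2026 dates

def make_example(ref: date, tz: str, sport: str) -> dict:
    """Build one (prompt-context, target-call) training pair. The date
    context is embedded in the prompt so relative dates stay resolvable."""
    event_date = ref + timedelta(days=1)  # "tomorrow" template
    prompt = (
        f"Current date: {ref.isoformat()} ({ref.strftime('%A')})\n"
        f"User timezone: {tz}\n"
        f"User request: {sport} tomorrow 4pm at Central Park"
    )
    target = {
        "name": "create_sports_event",
        "arguments": {
            "sport": sport,
            "venue_name": "Central Park",
            # Offset shown fixed for brevity; derive it from tz in practice.
            "start_time": f"{event_date.isoformat()}T16:00:00-05:00",
        },
    }
    return {"prompt": prompt, "completion": json.dumps(target)}

dataset = [make_example(r, t, s)
           for r, t, s in product(REFERENCE_DATES, TIMEZONES, SPORTS)]
```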
### Training Procedure

Fine-tuned using TRL's `SFTTrainer` with LoRA adapters via PEFT.

#### LoRA Configuration

| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| Bias | none |
| Task type | CAUSAL_LM |
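In PEFT terms, the table above corresponds to a configuration along these lines (a sketch; the author's exact training script is not published):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                      # rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```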
#### Training Hyperparameters

| Parameter | Value |
|---|---|
| Training regime | bf16 (bfloat16 non-mixed precision) |
| Epochs | 2 |
| Batch size | 1 (per device) |
| Gradient accumulation steps | 8 (effective batch = 8) |
| Learning rate | 2e-4 |
| Warmup steps | 20 |
| Optimizer | AdamW (torch) |
| Max sequence length | 1024 |
| Save strategy | per epoch |
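Expressed as a TRL config, the hyperparameters above would look roughly like this (a sketch; parameter names vary across TRL versions, e.g. `max_length` was `max_seq_length` in older releases):

```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="funcgemma-event-parser",
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size 8
    learning_rate=2e-4,
    warmup_steps=20,
    optim="adamw_torch",
    bf16=True,                       # bfloat16, non-mixed precision
    max_length=1024,
    save_strategy="epoch",
)
```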
#### Speeds, Sizes, Times

| Metric | Value |
|---|---|
| Training steps | 144 |
| Training time | ~8 minutes |
| Adapter size | ~1.7 MB |
| Base model parameters | 270M |
| Trainable parameters | ~0.5% of base |
## Evaluation

### Testing Data

6 held-out queries with unseen venue names across 5 timezones, testing relative date resolution and diverse sports.
### Metrics

- Parse success rate: Whether the model output is valid JSON with the correct function name
- Field accuracy: Correct sport, venue, date, time, and timezone offset
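Both metrics reduce to a small scorer over model outputs. An illustrative sketch (`score_output` is hypothetical, not the author's evaluation script):

```python
import json

def score_output(raw: str, expected: dict) -> dict:
    """Score one model output: did it parse with the right function name,
    and what fraction of the key fields match the expected values?"""
    scores = {"parsed": False, "field_accuracy": 0.0}
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return scores  # parse failure: both metrics stay at zero
    scores["parsed"] = call.get("name") == "create_sports_event"
    args = call.get("arguments", {})
    fields = ["sport", "venue_name", "start_time"]
    hits = sum(args.get(f) == expected[f] for f in fields)
    scores["field_accuracy"] = hits / len(fields)
    return scores
```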
### Results

6/6 queries parsed successfully (100% parse rate).

| Query | Timezone | Sport | Venue | Time | Pass |
|---|---|---|---|---|---|
| Soccer this Friday 4pm @ Washington Square Park | America/New_York | Soccer | Washington Square Park | 2026-02-13T16:00:00-05:00 | ✓ |
| Basketball tomorrow 6pm at Barclays Center | America/New_York | Basketball | Barclays Center | 2026-02-07T18:00:00-05:00 | ✓ |
| Beach volleyball Saturday 2pm at Santa Monica Beach | America/Los_Angeles | Beach Volleyball | Santa Monica Beach | 2026-02-07T14:00:00-08:00 | ✓ |
| Pickup basketball Friday 6pm at Wrigley Field | America/Chicago | Basketball | Wrigley Field | 2026-02-07T18:00:00-06:00 | ✓ |
| Football Saturday 3pm at Hampstead Heath | Europe/London | Football | Hampstead Heath | 2026-02-13T15:00:00+00:00 | ✓ |
| Tennis next Monday 10am at Central Park Tennis Courts | America/New_York | Tennis | Central Park Tennis Courts | 2026-02-07T10:00:00-05:00 | ✓ |
### Benchmark vs Gemma-2 2B

Compared against a Gemma-2 2B LoRA adapter trained on the same task, on a Tesla T4 GPU:
| Metric | FuncGemma 270M | Gemma-2 2B | Winner |
|---|---|---|---|
| Model load time | 5.14s | 8.36s | 270M (1.6× faster) |
| Avg inference time | 5.584s | 7.182s | 270M (1.3× faster) |
| Tokens/sec | 20.1 | 15.6 | 270M |
| Parse success | 6/6 | 6/6 | Tie |
The 270M model matches the 2B model on accuracy while being 1.3× faster at inference and 1.6× faster to load, with ~7× fewer parameters.
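If you want to reproduce these numbers on your own hardware, a comparable timing harness is straightforward. In this sketch, `generate_fn` is a stand-in for a call to `model.generate` plus decoding that returns the text and the number of newly generated tokens:

```python
import time

def benchmark(generate_fn, prompts):
    """Time generation over a set of prompts and report average latency
    and throughput. generate_fn(prompt) -> (text, n_new_tokens)."""
    total_time = 0.0
    total_tokens = 0
    for prompt in prompts:
        start = time.perf_counter()
        _, n_tokens = generate_fn(prompt)
        total_time += time.perf_counter() - start
        total_tokens += n_tokens
    return {
        "avg_inference_s": total_time / len(prompts),
        "tokens_per_s": total_tokens / total_time,
    }
```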
## Technical Specifications

### Model Architecture and Objective

- Architecture: Gemma 270M (decoder-only transformer) with LoRA adapters on attention projection layers
- Objective: Causal language modeling (next-token prediction) fine-tuned for structured function-call generation
### Compute Infrastructure

#### Hardware

- GPU: NVIDIA Tesla T4 (15.8 GB VRAM)
- Platform: Lightning AI Studio
#### Software

- Transformers: latest
- PEFT: 0.18.1
- TRL: latest
- PyTorch: 2.x with CUDA
- Python: 3.12
### Framework Versions

- PEFT 0.18.1