---
base_model: google/functiongemma-270m-it
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/functiongemma-270m-it
- lora
- sft
- transformers
- trl
- function-calling
- sports
- event-parser
- gemma
language:
- en
license: apache-2.0
---
# FunctionGemma 270M – Sports Event Parser
A lightweight LoRA fine-tune of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) that converts natural language sports event requests into structured `create_sports_event` function calls with proper ISO 8601 timestamps and timezone handling.
> *"Soccer this Friday 4pm @ Central Park"* → `{"sport": "Soccer", "venue_name": "Central Park", "start_time": "2026-02-13T16:00:00-05:00", ...}`
## Model Details
### Model Description
This adapter teaches a 270M-parameter Gemma model to act as a **function-calling layer** for sports event creation. Given a user's natural language request plus their timezone, the model outputs a JSON function call with the correct sport, venue, date (resolved from relative references like "tomorrow", "this Friday", "next Monday"), time in ISO 8601 with timezone offset, participant count, and event type.
- **Developed by:** [sarvkk](https://huggingface.co/sarvkk)
- **Model type:** Causal Language Model (LoRA adapter)
- **Language(s):** English
- **License:** Apache 2.0
- **Base model:** [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) (270M parameters)
- **Adapter size:** ~0.5% of base model parameters
### Key Features
- **Relative date resolution** – understands "tomorrow", "this Friday", "next Monday", "Saturday", etc.
- **Multi-timezone support** – outputs correct UTC offsets for America/New_York, America/Los_Angeles, America/Chicago, Europe/London, Asia/Tokyo, Australia/Sydney, and more
- **ISO 8601 timestamps** – e.g. `2026-02-07T16:00:00-05:00`
- **Structured output** – consistently produces valid JSON function calls
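As a quick sanity check on the timezone handling (independent of the model itself), the standard-library `zoneinfo` module confirms the UTC offsets the model is expected to emit for a winter date:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Offsets for 2026-02-07 in three of the supported timezones.
for tz_name in ["America/New_York", "Europe/London", "Asia/Tokyo"]:
    stamp = datetime(2026, 2, 7, 16, 0, tzinfo=ZoneInfo(tz_name)).isoformat()
    print(f"{tz_name}: {stamp}")
# America/New_York: 2026-02-07T16:00:00-05:00
# Europe/London:    2026-02-07T16:00:00+00:00
# Asia/Tokyo:       2026-02-07T16:00:00+09:00
```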
## How to Get Started
> **Important:** Use `bfloat16` – Gemma's RMSNorm produces NaN in fp16.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
import json
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
BASE_MODEL = "google/functiongemma-270m-it"
ADAPTER_REPO = "sarvkk/funcgemma-event-parser-v2"
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load base + LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
BASE_MODEL,
device_map={"": device},
dtype=torch.bfloat16,
attn_implementation="eager",
low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(base_model, ADAPTER_REPO, device_map={"": device})
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_REPO)
model.eval()
# Function schema
FUNCTION_SCHEMA = {
"name": "create_sports_event",
"description": "Create a new sports event from natural language description",
"parameters": {
"type": "object",
"properties": {
"sport": {"type": "string", "description": "Type of sport"},
"venue_name": {"type": "string", "description": "Name of the venue"},
"start_time": {"type": "string", "description": "ISO 8601 with timezone"},
"max_participants": {"type": "integer", "default": 2},
"event_type": {
"type": "string",
"enum": ["Casual", "Light Training", "Looking to Improve", "Competitive Game"],
},
},
"required": ["sport", "venue_name", "start_time"],
},
}
# Build prompt with date context
now = datetime.now()
today_str = now.strftime("%Y-%m-%d")
today_day = now.strftime("%A")
current_time = now.strftime("%H:%M")
tomorrow_str = (now + timedelta(days=1)).strftime("%Y-%m-%d")
user_timezone = "America/New_York"
tz = ZoneInfo(user_timezone)
offset = now.replace(hour=12, tzinfo=tz).strftime("%z")
tz_offset = f"{offset[:3]}:{offset[3:]}"
user_query = "Soccer this Friday 4pm @ Central Park"
prompt = f"""<start_of_turn>user
Current date and time: {today_str} ({today_day}) at {current_time}
User timezone: {user_timezone} (UTC{tz_offset})
User request: {user_query}
Available functions:
{json.dumps([FUNCTION_SCHEMA], indent=2)}
Important:
- Calculate dates relative to {today_str}
- "tomorrow" = {tomorrow_str}
- "Friday" = the next upcoming Friday from {today_str}
- All times should be in ISO 8601 format with timezone offset
- Example: "{tomorrow_str}T16:00:00{tz_offset}"
<end_of_turn>
<start_of_turn>model
"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=300,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
call_start = result.find("<function_call>")
if call_start == -1:
    raise ValueError("No <function_call> tag found in model output")
start = call_start + len("<function_call>")
end = result.find("</function_call>", start)
if end == -1:
    end = len(result)
parsed = json.loads(result[start:end].strip())
print(json.dumps(parsed, indent=2))
```
### Example Output
```json
{
"name": "create_sports_event",
"arguments": {
"sport": "Soccer",
"venue_name": "Central Park",
"start_time": "2026-02-13T16:00:00-05:00",
"max_participants": 22,
"event_type": "Casual"
}
}
```
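Before a parsed call reaches your backend, it is worth checking it against the schema. Below is a minimal stdlib-only sketch; `validate_call` is a hypothetical helper written for this card, not part of the repo:

```python
from datetime import datetime

REQUIRED = ["sport", "venue_name", "start_time"]
EVENT_TYPES = {"Casual", "Light Training", "Looking to Improve", "Competitive Game"}

def validate_call(call: dict) -> list:
    """Return a list of problems with a parsed function call (empty list = valid)."""
    errors = []
    if call.get("name") != "create_sports_event":
        errors.append("unexpected function name")
    args = call.get("arguments", {})
    for key in REQUIRED:
        if key not in args:
            errors.append(f"missing required field: {key}")
    if "event_type" in args and args["event_type"] not in EVENT_TYPES:
        errors.append("event_type not in enum")
    if "start_time" in args:
        try:
            # ISO 8601 with an explicit UTC offset, as the model is trained to emit
            if datetime.fromisoformat(args["start_time"]).utcoffset() is None:
                errors.append("start_time missing timezone offset")
        except ValueError:
            errors.append("start_time is not valid ISO 8601")
    return errors
```

An empty list means the call is safe to execute; anything else can be surfaced as a retry or rejection.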
## Training Details
### Training Data
Synthetically generated dataset of ~600 examples covering:
- **25 query templates** per reference date – varied sports (Soccer, Basketball, Tennis, Volleyball, Running, Swimming, Cycling, Golf, Hockey, Badminton, Yoga), venues, times, and phrasing styles
- **6 reference dates** across February 2026 (covering different days of the week)
- **4 user timezones**: America/New_York, America/Los_Angeles, America/Chicago, Europe/London
- **Additional timezone examples**: Asia/Tokyo, Australia/Sydney embedded in the training set
- **Event types**: Casual, Light Training, Looking to Improve, Competitive Game
Each example includes the current date context in the prompt so the model learns to resolve relative dates ("tomorrow", "this Friday", "next Monday") dynamically.
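The dataset itself is not published; the sketch below illustrates how one such (date context, target call) pair could be assembled. All names here are illustrative, not the actual generation script:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_friday(reference: datetime) -> datetime:
    """Next upcoming Friday strictly after the reference date (weekday 4)."""
    days_ahead = (4 - reference.weekday()) % 7 or 7
    return reference + timedelta(days=days_ahead)

def make_example(reference: datetime, tz_name: str) -> dict:
    """Build one training pair for a 'Soccer this Friday 4pm' style query."""
    start = next_friday(reference).replace(hour=16, minute=0, tzinfo=ZoneInfo(tz_name))
    return {
        "context": f"Current date: {reference:%Y-%m-%d} ({reference:%A}), timezone: {tz_name}",
        "query": "Soccer this Friday 4pm @ Central Park",
        "target": {
            "name": "create_sports_event",
            "arguments": {
                "sport": "Soccer",
                "venue_name": "Central Park",
                "start_time": start.isoformat(),
            },
        },
    }

example = make_example(datetime(2026, 2, 10), "America/New_York")  # a Tuesday
print(example["target"]["arguments"]["start_time"])  # 2026-02-13T16:00:00-05:00
```

Because the reference date lives in the prompt rather than in the weights, the same model can resolve "this Friday" correctly on any day it is run.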
### Training Procedure
Fine-tuned using [TRL](https://github.com/huggingface/trl)'s `SFTTrainer` with LoRA adapters via [PEFT](https://github.com/huggingface/peft).
#### LoRA Configuration
| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target modules | `q_proj`, `k_proj`, `v_proj`, `o_proj` |
| Bias | none |
| Task type | CAUSAL_LM |
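In PEFT terms, the table above corresponds to roughly this configuration (a sketch; the exact training script is not published):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                     # rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```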
#### Training Hyperparameters
| Parameter | Value |
|---|---|
| **Training regime** | bf16 (bfloat16 non-mixed precision) |
| Epochs | 2 |
| Batch size | 1 (per device) |
| Gradient accumulation steps | 8 (effective batch = 8) |
| Learning rate | 2e-4 |
| Warmup steps | 20 |
| Optimizer | AdamW (torch) |
| Max sequence length | 1024 |
| Save strategy | per epoch |
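The hyperparameter table maps onto a TRL setup along these lines. This is a sketch, not the actual script: `model`, `train_dataset`, and `lora_config` are assumed to be defined elsewhere, and argument names can differ between TRL versions.

```python
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="funcgemma-event-parser",
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size 8
    learning_rate=2e-4,
    warmup_steps=20,
    bf16=True,                       # bfloat16 non-mixed precision
    max_length=1024,                 # named max_seq_length in older TRL releases
    save_strategy="epoch",
)

trainer = SFTTrainer(
    model=model,                     # assumed: base model loaded in bfloat16
    args=training_args,
    train_dataset=train_dataset,     # assumed: the ~600-example dataset
    peft_config=lora_config,         # assumed: a LoraConfig matching the table above
)
trainer.train()
```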
#### Speeds, Sizes, Times
| Metric | Value |
|---|---|
| Training steps | 144 |
| Training time | ~8 minutes |
| Adapter size | ~1.7 MB |
| Base model parameters | 270M |
| Trainable parameters | ~0.5% of base |
## Evaluation
### Testing Data
6 held-out queries with unseen venue names, spanning the four timezones shown in the results table below, testing relative date resolution and diverse sports.
### Metrics
- **Parse success rate**: Whether the model output is valid JSON with the correct function name
- **Field accuracy**: Correct sport, venue, date, time, and timezone offset
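The parse-success check can be implemented as a small helper. This is a sketch of how such a metric might be computed, not the actual eval harness:

```python
import json

def parse_success(raw_output: str) -> bool:
    """True if the output contains a valid create_sports_event function call."""
    start = raw_output.find("<function_call>")
    if start == -1:
        return False
    start += len("<function_call>")
    end = raw_output.find("</function_call>", start)
    if end == -1:
        end = len(raw_output)
    try:
        call = json.loads(raw_output[start:end].strip())
    except json.JSONDecodeError:
        return False
    return call.get("name") == "create_sports_event"
```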
### Results
**6/6 queries parsed successfully (100% parse rate)**
| Query | Timezone | Sport | Venue | Time | ✓ |
|---|---|---|---|---|---|
| Soccer this Friday 4pm @ Washington Square Park | America/New_York | Soccer | Washington Square Park | 2026-02-13T16:00:00-05:00 | ✓ |
| Basketball tomorrow 6pm at Barclays Center | America/New_York | Basketball | Barclays Center | 2026-02-07T18:00:00-05:00 | ✓ |
| Beach volleyball Saturday 2pm at Santa Monica Beach | America/Los_Angeles | Beach Volleyball | Santa Monica Beach | 2026-02-07T14:00:00-08:00 | ✓ |
| Pickup basketball Friday 6pm at Wrigley Field | America/Chicago | Basketball | Wrigley Field | 2026-02-07T18:00:00-06:00 | ✓ |
| Football Saturday 3pm at Hampstead Heath | Europe/London | Football | Hampstead Heath | 2026-02-13T15:00:00+00:00 | ✓ |
| Tennis next Monday 10am at Central Park Tennis Courts | America/New_York | Tennis | Central Park Tennis Courts | 2026-02-07T10:00:00-05:00 | ✓ |
### Benchmark vs Gemma-2 2B
Compared against a [Gemma-2 2B LoRA adapter](https://huggingface.co/sarvkk/gemma-event-parser-v2) trained on the same task, on a Tesla T4 GPU:
| Metric | FuncGemma 270M | Gemma-2 2B | Winner |
|---|---|---|---|
| Model load time | 5.14s | 8.36s | **270M** (1.6× faster) |
| Avg inference time | 5.584s | 7.182s | **270M** (1.3× faster) |
| Tokens/sec | 20.1 | 15.6 | **270M** |
| Parse success | 6/6 | 6/6 | Tie |
The 270M model matches the 2B model on accuracy while being **1.3× faster at inference** and **1.6× faster to load**, with ~7× fewer parameters.
## Technical Specifications
### Model Architecture and Objective
- **Architecture:** Gemma 270M (decoder-only transformer) with LoRA adapters on attention projection layers
- **Objective:** Causal language modeling (next-token prediction) fine-tuned for structured function-call generation
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA Tesla T4 (15.8 GB VRAM)
- **Platform:** Lightning AI Studio
#### Software
- **Transformers:** latest
- **PEFT:** 0.18.1
- **TRL:** latest
- **PyTorch:** 2.x with CUDA
- **Python:** 3.12
## Framework Versions
- PEFT 0.18.1