---
base_model: google/functiongemma-270m-it
library_name: transformers
tags:
  - function-calling
  - agents
  - gemma
  - text-generation
  - tiny-agent
license: gemma
language:
  - en
pipeline_tag: text-generation
---

# Tiny Agent: FunctionGemma-270m-IT (Fine-Tuned)

This is a fine-tuned version of `google/functiongemma-270m-it` optimized for reliable function calling. It was trained as part of the "Tiny Agent Lab" project to distill the function-calling capabilities of larger models into a highly efficient 270M-parameter model.

## Model Description

- **Model Type:** Causal LM (Gemma)
- **Language(s):** English
- **License:** Gemma Terms of Use
- **Fine-tuned from:** `google/functiongemma-270m-it`

## Capabilities

This model is designed to:

1. **Detect User Intent:** Accurately identify when a tool call is needed.
2. **Generate Function Calls:** Output valid `<start_function_call>` XML/JSON blocks.
3. **Refuse Out-of-Scope Requests:** Politely decline requests for which no tool is available.
4. **Ask for Clarification:** Request missing parameter values interactively.
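
A caller must decide, per response, whether the model emitted a tool call, a refusal, or a clarification question. The sketch below shows one way to do that. Note the `<end_function_call>` closing delimiter and the JSON payload shape (`name`/`arguments`) are assumptions for illustration; adjust the parser to the model's actual output format.

```python
import json
import re

# Assumed delimiters: <start_function_call> ... <end_function_call>
# wrapping a JSON payload. Verify against real model output.
CALL_RE = re.compile(r"<start_function_call>\s*(.*?)\s*<end_function_call>", re.DOTALL)

def parse_function_call(text: str):
    """Return the parsed call payload, or None if no tool call was emitted."""
    match = CALL_RE.search(text)
    if match is None:
        return None  # direct answer, refusal, or clarification question
    return json.loads(match.group(1))

# Mocked model response (illustrative, not real model output):
response = (
    '<start_function_call>'
    '{"name": "get_weather", "arguments": {"city": "Paris"}}'
    '<end_function_call>'
)
call = parse_function_call(response)
```

When `parse_function_call` returns `None`, the text itself is the reply (a refusal or a clarification request) and can be shown to the user as-is.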

## Performance (V4 Evaluation)

On a held-out test set of 100 diverse queries:

- **Overall Accuracy:** 71%
- **Tool Selection Precision:** 88%
- **Tool Selection Recall:** 94%
- **F1 Score:** 0.91
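
The reported F1 score is consistent with the precision and recall above, since F1 is their harmonic mean:

```python
# F1 = harmonic mean of tool-selection precision and recall
precision, recall = 0.88, 0.94
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.91
```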

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "CuriousDragon/functiongemma-270m-tiny-agent"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

# Minimal inference sketch (the example query is illustrative):
messages = [{"role": "user", "content": "What is the weather in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

## Intended Use

This model is intended for research and educational purposes in building efficient agentic systems. It works best when provided with a clear system prompt defining the available tools.