---
language:
  - en
license: apache-2.0
base_model: google/functiongemma-270m-it
tags:
  - function-calling
  - mobile-actions
  - on-device
  - litertlm
  - edge-ai
  - android
  - gemma3
library_name: transformers
pipeline_tag: text-generation
---

# FunctionGemma 270M - Mobile Actions (LiteRT-LM Ready)

A fine-tuned FunctionGemma 270M model optimized for on-device function calling on Android devices using Google AI Edge Gallery and LiteRT-LM runtime.

## 🎯 Features

- ✅ Ready-to-use: Pre-converted .litertlm format for immediate deployment
- ✅ On-device function calling: Runs entirely on Android devices without internet
- ✅ Optimized: INT8 quantization (~271 MB) for efficient mobile deployment
- ✅ Mobile Actions: Supports 6 native Android functions
- ✅ Low latency: Optimized with extended KV cache (1024 tokens)

## 📱 Supported Mobile Actions

The model can trigger the following Android functions from natural-language prompts:

| Function   | Example Prompt |
|------------|----------------|
| Flashlight | "Turn on the flashlight" |
| Contacts   | "Create a contact for John Doe with phone 555-1234" |
| Email      | "Send email to john@example.com" |
| Maps       | "Show Times Square on the map" |
| WiFi       | "Turn off WiFi" |
| Calendar   | "Create a calendar event for Team Meeting tomorrow at 2 PM" |

## 🚀 Quick Start

### Download the Model

```bash
wget https://huggingface.co/Yagna1/functiongemma-270m-mobile-actions/resolve/main/mobile-actions_q8_ekv1024.litertlm
```

Or use Python:

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Yagna1/functiongemma-270m-mobile-actions",
    filename="mobile-actions_q8_ekv1024.litertlm"
)
print(f"Downloaded to: {model_path}")
```
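
As an optional sanity check, continuing from the snippet above, you can confirm the download roughly matches the ~271 MB listed in the model details:

```python
import os

# A file much smaller than the ~271 MB listed in this card usually
# indicates a truncated or failed download.
size_mb = os.path.getsize(model_path) / (1024 * 1024)
print(f"Model size: {size_mb:.0f} MB")
```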

### Use in Google AI Edge Gallery App

1. Install the Google AI Edge Gallery Android app
2. Import the `mobile-actions_q8_ekv1024.litertlm` file into the app (one way to copy the file onto the device is sketched below)
3. Navigate to the "Mobile Actions" feature
4. Test with natural language prompts like:
   - "Turn on flashlight"
   - "Create contact John Smith"
   - "Show Central Park on map"
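
One convenient way to copy the `.litertlm` file onto the device is with `adb`. The destination path below is only an assumption about a folder the Gallery app can browse when importing, so adjust it for your setup:

```bash
# Copy the model to the device's Downloads folder (path is an assumption;
# use any location the Gallery app's import dialog can reach).
adb push mobile-actions_q8_ekv1024.litertlm /sdcard/Download/
```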

πŸ—οΈ Model Architecture

- Base Model: google/functiongemma-270m-it
- Architecture: Gemma 3 (270M parameters)
- Quantization: INT8 (Dynamic)
- KV Cache: Extended to 1024 tokens for longer conversations
- Runtime: LiteRT-LM (Google's on-device inference engine)

## 📊 Model Details

| Property       | Value |
|----------------|-------|
| Parameters     | 270M |
| Quantization   | INT8 |
| Model Size     | 271 MB |
| Format         | .litertlm (LiteRT-LM) |
| Context Length | 1024 tokens |
| Target Device  | Android (ARM) |

## 🔧 Function Calling Format

The model uses LiteRT-LM's native function calling format:

```
<start_function_call>call:function_name{param1:value1,param2:value2}<end_function_call>
```

Example outputs:

```
User: "Turn on the flashlight"
Model: <start_function_call>call:enableFlashlight{}<end_function_call>

User: "Create contact John Doe with phone 555-1234"
Model: <start_function_call>call:createContact{contactName:John Doe,phoneNumber:555-1234}<end_function_call>
```
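
For illustration, the call string can be picked apart with a few lines of plain Python. This is only a sketch of a parser for the format shown above; it is not part of LiteRT-LM or the Gallery app, which handle function calls natively:

```python
import re

# Illustrative parser for the call format shown above; not part of LiteRT-LM.
CALL_RE = re.compile(
    r"<start_function_call>call:(?P<name>\w+)\{(?P<args>[^}]*)\}<end_function_call>"
)

def parse_function_call(text: str):
    """Extract the function name and parameters from a model response."""
    match = CALL_RE.search(text)
    if match is None:
        return None  # plain-text reply, no function call
    args = {}
    if match.group("args"):
        # Values containing commas or colons would need a smarter split.
        for pair in match.group("args").split(","):
            key, _, value = pair.partition(":")
            args[key.strip()] = value.strip()
    return match.group("name"), args

print(parse_function_call(
    "<start_function_call>call:createContact"
    "{contactName:John Doe,phoneNumber:555-1234}<end_function_call>"
))
# -> ('createContact', {'contactName': 'John Doe', 'phoneNumber': '555-1234'})
```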

## 📚 Training Details

This model was fine-tuned on synthetic Mobile Actions data designed to match LiteRT-LM's expected function calling format. The training focused on:

- Natural language → function call mapping
- Parameter extraction from user queries
- Handling edge cases and variations
- Multi-turn conversation support
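
As a purely hypothetical illustration (the actual training data and schema are not published in this card), a single training pair targeting the format above might look something like:

```python
# Hypothetical training pair; not the real dataset schema.
sample = {
    "user": "Create a contact for John Doe with phone 555-1234",
    "target": "<start_function_call>call:createContact"
              "{contactName:John Doe,phoneNumber:555-1234}<end_function_call>",
}
```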

## ⚠️ Limitations

- Limited to 6 pre-defined Android functions
- English language only
- Requires an Android device with ARMv8-A or newer
- May not handle complex multi-step actions
- Function parameters must match the expected schema

## 🤝 Credits

Original Model: This is a mirror/re-upload of JackJ1/functiongemma-270m-it-mobile-actions-litertlm

Thanks to:

- JackJ1 for the original fine-tuning work
- Google for the FunctionGemma base model and the LiteRT-LM runtime
- The Google AI Edge team for the Gallery app and tools

## 📄 License

Apache 2.0 (same as base FunctionGemma model)

## 🔗 Resources

- Base model: [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)
- Original fine-tune: [JackJ1/functiongemma-270m-it-mobile-actions-litertlm](https://huggingface.co/JackJ1/functiongemma-270m-it-mobile-actions-litertlm)

## 📞 Contact

For issues or questions about this model mirror, please open an issue on the repository.


Note: This model is specifically formatted for the Google AI Edge Gallery app and requires the LiteRT-LM runtime. For general-purpose inference, use the base model or convert to standard formats.