---
language:
- en
license: apache-2.0
base_model: google/functiongemma-270m-it
tags:
- function-calling
- mobile-actions
- on-device
- litertlm
- edge-ai
- android
- gemma3
library_name: transformers
pipeline_tag: text-generation
---

# FunctionGemma 270M - Mobile Actions (LiteRT-LM Ready)

A fine-tuned FunctionGemma 270M model optimized for on-device function calling on Android devices, using the [Google AI Edge Gallery](https://github.com/google-ai-edge/gallery) app and the LiteRT-LM runtime.

## 🎯 Features

- ✅ **Ready to use**: Pre-converted `.litertlm` format for immediate deployment
- ✅ **On-device function calling**: Runs entirely on Android devices without an internet connection
- ✅ **Optimized**: INT8 quantization (~271 MB) for efficient mobile deployment
- ✅ **Mobile Actions**: Supports 6 native Android functions
- ✅ **Low latency**: Optimized with an extended KV cache (1024 tokens)

## 📱 Supported Mobile Actions

The model can execute the following Android functions via natural language:

| Function | Example Prompt |
|----------|----------------|
| **Flashlight** | "Turn on the flashlight" |
| **Contacts** | "Create a contact for John Doe with phone 555-1234" |
| **Email** | "Send email to john@example.com" |
| **Maps** | "Show Times Square on the map" |
| **WiFi** | "Turn off WiFi" |
| **Calendar** | "Create a calendar event for Team Meeting tomorrow at 2 PM" |

## 🚀 Quick Start

### Download the Model

```bash
wget https://huggingface.co/Yagna1/functiongemma-270m-mobile-actions/resolve/main/mobile-actions_q8_ekv1024.litertlm
```

Or use Python:

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Yagna1/functiongemma-270m-mobile-actions",
    filename="mobile-actions_q8_ekv1024.litertlm"
)
print(f"Downloaded to: {model_path}")
```

### Use in the Google AI Edge Gallery App

1. **Install** the [Google AI Edge Gallery](https://github.com/google-ai-edge/gallery) Android app
2. **Import** the `mobile-actions_q8_ekv1024.litertlm` file into the app
3. **Navigate** to the "Mobile Actions" feature
4. **Test** with natural language prompts like:
   - "Turn on flashlight"
   - "Create contact John Smith"
   - "Show Central Park on map"

## 🏗️ Model Architecture

- **Base Model**: [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)
- **Architecture**: Gemma 3 (270M parameters)
- **Quantization**: INT8 (dynamic)
- **KV Cache**: Extended to 1024 tokens for longer conversations
- **Runtime**: LiteRT-LM (Google's on-device inference engine)

## 📊 Model Details

| Property | Value |
|----------|-------|
| Parameters | 270M |
| Quantization | INT8 |
| Model Size | 271 MB |
| Format | `.litertlm` (LiteRT-LM) |
| Context Length | 1024 tokens |
| Target Device | Android (ARM) |

## 🔧 Function Calling Format

The model uses LiteRT-LM's native function calling format:

```
call:function_name{param1:value1,param2:value2}
```

Example outputs:

**User**: "Turn on the flashlight"
**Model**: `call:enableFlashlight{}`

**User**: "Create contact John Doe with phone 555-1234"
**Model**: `call:createContact{contactName:John Doe,phoneNumber:555-1234}`

## 📚 Training Details

This model was fine-tuned on synthetic Mobile Actions data designed to match LiteRT-LM's expected function calling format.
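For host-side experimentation, the `call:` output format shown above can be parsed with a few lines of Python. This is an illustrative sketch only — the regex and the returned `(name, params)` shape are assumptions for the example, not part of the LiteRT-LM API, and values containing commas or braces would need a real parser:

```python
import re

# Hypothetical parser for the model's "call:name{k:v,k:v}" output format.
CALL_RE = re.compile(r"^call:(?P<name>\w+)\{(?P<args>.*)\}$")

def parse_call(output: str):
    """Split a model output line into (function_name, params_dict).

    Returns None if the output is not a function call.
    """
    match = CALL_RE.match(output.strip())
    if match is None:
        return None
    name = match.group("name")
    params = {}
    if match.group("args"):
        for pair in match.group("args").split(","):
            key, _, value = pair.partition(":")
            params[key.strip()] = value.strip()
    return name, params

print(parse_call("call:enableFlashlight{}"))
# → ('enableFlashlight', {})
print(parse_call("call:createContact{contactName:John Doe,phoneNumber:555-1234}"))
# → ('createContact', {'contactName': 'John Doe', 'phoneNumber': '555-1234'})
```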
The training focused on:

- Natural language → function call mapping
- Parameter extraction from user queries
- Handling edge cases and variations
- Multi-turn conversation support

## ⚠️ Limitations

- Limited to 6 predefined Android functions
- English language only
- Requires an Android device with ARMv8-A or newer
- May not handle complex multi-step actions
- Function parameters must match the expected schema

## 🤝 Credits

**Original Model**: This is a mirror/re-upload of [JackJ1/functiongemma-270m-it-mobile-actions-litertlm](https://huggingface.co/JackJ1/functiongemma-270m-it-mobile-actions-litertlm)

**Thanks to**:

- **JackJ1** for the original fine-tuning work
- **Google** for the FunctionGemma base model and the LiteRT-LM runtime
- **Google AI Edge Team** for the Gallery app and tools

## 📄 License

Apache 2.0 (same as the base FunctionGemma model)

## 🔗 Resources

- [Google AI Edge Gallery GitHub](https://github.com/google-ai-edge/gallery)
- [FunctionGemma Documentation](https://ai.google.dev/gemma/docs/function_calling)
- [LiteRT-LM Runtime](https://github.com/google-ai-edge/LiteRT-LM)
- [Original Base Model](https://huggingface.co/google/functiongemma-270m-it)

## 📞 Contact

For issues or questions about this model mirror, please open an issue on the [repository](https://huggingface.co/Yagna1/functiongemma-270m-mobile-actions/discussions).

---

**Note**: This model is specifically formatted for the Google AI Edge Gallery app and requires the LiteRT-LM runtime. For general-purpose inference, use the base model or convert to standard formats.
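To illustrate how a host application might route the model's function calls to the six supported actions, here is a hypothetical Python dispatcher. The handler names and return strings are assumptions for the sketch; on a real device the Gallery app invokes native Android APIs rather than Python stubs:

```python
# Hypothetical dispatch table mapping the model's function names
# (as in the examples above) to handler stubs. On a real device these
# handlers would call Android APIs; here they just describe the action.

def enable_flashlight(**_params):
    return "flashlight on"

def create_contact(contactName="", phoneNumber="", **_params):
    return f"contact {contactName} ({phoneNumber}) created"

HANDLERS = {
    "enableFlashlight": enable_flashlight,
    "createContact": create_contact,
    # Email, maps, WiFi, and calendar handlers would follow the same pattern.
}

def dispatch(name, params):
    """Look up the handler for a parsed function call and invoke it."""
    handler = HANDLERS.get(name)
    if handler is None:
        return f"unknown function: {name}"
    return handler(**params)

print(dispatch("enableFlashlight", {}))
# → flashlight on
print(dispatch("createContact",
               {"contactName": "John Doe", "phoneNumber": "555-1234"}))
# → contact John Doe (555-1234) created
```

Unknown function names fall through to an error string rather than raising, which mirrors the model-card caveat that parameters and names must match the expected schema.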