---
base_model: google/functiongemma-270m-it
tags:
- function-calling
- mobile-actions
- gemma
- flashlight
language:
- en
license: gemma
---

# FunctionGemma 270M — Mobile Actions (Flashlight)

Fine-tuned from [`google/functiongemma-270m-it`](https://huggingface.co/google/functiongemma-270m-it) on the
[Google Mobile Actions](https://huggingface.co/datasets/google/mobile-actions) dataset,
filtered to **flashlight**-related samples (1,509 train / 175 eval).

## Training Details

| Setting        | Value                        |
|----------------|------------------------------|
| Base model     | google/functiongemma-270m-it |
| Dataset        | google/mobile-actions        |
| Filter         | flashlight samples only      |
| Train samples  | 1,509                        |
| Eval samples   | 175                          |
| Epochs         | 2                            |
| Batch size     | 4 (effective 32)             |
| Optimizer      | AdamW (fused)                |
| Precision      | BF16 + TF32                  |
| Train loss     | 0.023                        |
| Eval loss      | 0.0084                       |
| Token accuracy | 99.72%                       |

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model = AutoModelForCausalLM.from_pretrained("arunkumar629/functiongemma-270m-it-mobile-actions")
tokenizer = AutoTokenizer.from_pretrained("arunkumar629/functiongemma-270m-it-mobile-actions")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```
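Once loaded, the model produces a function call as generated text. Below is a minimal sketch of parsing such output, assuming the model emits a single JSON object; the actual output schema is defined by the FunctionGemma chat template, and the `toggle_flashlight` name and its arguments here are hypothetical. Check the base model card for the authoritative format.

```python
import json

# Hypothetical helper for illustration only: assumes the model emits one
# JSON object like {"name": ..., "arguments": {...}}. The real output
# format comes from the FunctionGemma chat template (see base model card).
def parse_function_call(generated_text: str):
    call = json.loads(generated_text.strip())
    return call["name"], call.get("arguments", {})

# In practice, generated_text would come from the pipeline above, e.g.:
#   generated_text = pipe(prompt, max_new_tokens=64)[0]["generated_text"]
name, args = parse_function_call(
    '{"name": "toggle_flashlight", "arguments": {"enabled": true}}'
)
```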