---
license: gemma
base_model: google/functiongemma-270m-it
base_model_relation: finetune
datasets:
- acon96/Home-Assistant-Requests-V2
language:
- en
- de
- es
- fr
tags:
- automation
- home
- assistant
pipeline_tag: text-generation
---
# Home-FunctionGemma-270m
The "Home" model is a fine tuning of the FunctionGemma model from Google. The model is able to control devices in the user's house via the "Assist" API, as well as perform basic question answering about the provided home's state.
The model is quantized using llama.cpp so that it can run in the very low-resource environments that are common for Home Assistant installations, such as Raspberry Pis.
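
One way to run a quantized GGUF build on such hardware is through the `llama-cpp-python` bindings; a minimal sketch follows, where the GGUF file name and the context size are assumptions.

```python
# Minimal sketch: running a quantized GGUF build via llama-cpp-python.
# The model file name below is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="Home-FunctionGemma-270m.Q4_K_M.gguf", n_ctx=2048)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Is the garage door open?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```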
## Training
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
### Datasets
- [Home Assistant Requests V2](https://huggingface.co/datasets/acon96/Home-Assistant-Requests-V2)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 59
- training_steps: 597
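
For reference, the total train batch size follows from the values above: train_batch_size × gradient_accumulation_steps × num_devices = 1 × 16 × 2 = 32.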
## License
The model is licensed under the Gemma license as it is a fine-tuning of the FunctionGemma model.