gue22 committed (verified)
Commit 0cf2be3 · Parent(s): 7c146bc

Update README.md

Files changed (1): README.md (+58, −4)
README.md CHANGED (@@ -1,8 +1,62 @@)

Removed (previous contents):

    ---
    base_model: google/functiongemma-270m-it
    tags:
    - function-calling
    - mobile-actions
    - gemma
    ---
    A fine-tuned model based on `google/functiongemma-270m-it`.

Added (new contents):
---
base_model: google/functiongemma-270m-it
library_name: transformers
model_name: funcgemma-mobile-actions
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for funcgemma-mobile-actions

This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl).

Training was done fully locally on a PC with a 32 GB Nvidia RTX Pro 4500 GPU (comparable to an RTX 5080) and took roughly 25 minutes.
The script was derived from the [Google Colab example](https://github.com/google-gemini/gemma-cookbook/tree/main/FunctionGemma) and is available in [ai-bits.org's FunctionGemma repo](https://github.com/ai-bits/functiongemma).

For the time being, the LiteRT-LM model conversion for edge use (Android, ...) will be available in a subdirectory here.

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Repo id for this card; the generated template had left model="None" here.
generator = pipeline("text-generation", model="gue22/funcgemma-mobile-actions", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
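The quick start prints raw generated text; for a function-calling model, the useful next step on device is routing a structured call to a mobile-action handler. A minimal, purely illustrative sketch — the `{"name": ..., "arguments": {...}}` JSON shape, the `set_alarm` function, and the handler registry are hypothetical, not this model's documented output format:

```python
import json

# Hypothetical registry of mobile-action handlers; names and behavior
# are illustrative only.
HANDLERS = {
    "set_alarm": lambda args: f"alarm set for {args['hour']:02d}:{args['minute']:02d}",
}

def dispatch(model_output: str) -> str:
    """Parse an assumed JSON function call emitted by the model and invoke its handler."""
    call = json.loads(model_output)
    handler = HANDLERS.get(call["name"])
    if handler is None:
        raise ValueError(f"unknown function: {call['name']}")
    return handler(call["arguments"])

print(dispatch('{"name": "set_alarm", "arguments": {"hour": 6, "minute": 30}}'))
# → alarm set for 06:30
```

In a real app the dispatcher would validate arguments against the declared function schema before executing anything on the device.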

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.25.1
- Transformers: 4.57.1
- Pytorch: 2.9.1
- Datasets: 4.4.1
- Tokenizers: 0.22.1

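The training step itself is not shown in this card. As a rough sketch of what an SFT run with TRL's `SFTTrainer` could look like — the dataset file, column names, and hyperparameters are assumptions, not the author's actual script (see the linked ai-bits.org repo for that):

```python
def to_chat_example(user_turn: str, assistant_turn: str) -> dict:
    """Wrap one (instruction, function-call) pair in the conversational
    "messages" format that SFTTrainer consumes."""
    return {
        "messages": [
            {"role": "user", "content": user_turn},
            {"role": "assistant", "content": assistant_turn},
        ]
    }

def train() -> None:
    # Heavy imports kept local: running this needs a GPU and a model download.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Hypothetical mobile-actions dataset with "prompt"/"call" columns.
    dataset = load_dataset("json", data_files="mobile_actions.jsonl", split="train")
    dataset = dataset.map(
        lambda row: to_chat_example(row["prompt"], row["call"]),
        remove_columns=dataset.column_names,
    )

    trainer = SFTTrainer(
        model="google/functiongemma-270m-it",
        train_dataset=dataset,
        args=SFTConfig(output_dir="funcgemma-mobile-actions", max_length=1024),
    )
    trainer.train()
```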
## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```