Evan88 committed
Commit ed70b58 · verified · 1 Parent(s): 22bb5b0

End of training

.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,59 @@
+ ---
+ base_model: google/gemma-2b
+ library_name: transformers
+ model_name: gemma-finetuned
+ tags:
+ - generated_from_trainer
+ - Gemma2Lora
+ - trl
+ - sft
+ licence: license
+ ---
+
+ # Model Card for gemma-finetuned
+
+ This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="Evan88/gemma-finetuned", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 0.16.0
+ - Transformers: 4.50.3
+ - Pytorch: 2.6.0
+ - Datasets: 3.5.0
+ - Tokenizers: 0.21.1
+
+ ## Citations
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+   title        = {{TRL: Transformer Reinforcement Learning}},
+   author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+   year         = 2020,
+   journal      = {GitHub repository},
+   publisher    = {GitHub},
+   howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
adapter_config.json CHANGED
@@ -1,7 +1,7 @@
  {
    "alpha_pattern": {},
    "auto_mapping": null,
-   "base_model_name_or_path": null,
+   "base_model_name_or_path": "google/gemma-2b",
    "bias": "none",
    "corda_config": null,
    "eva_config": null,
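With `base_model_name_or_path` now recorded in `adapter_config.json`, the adapter can be attached to its base model without naming the base explicitly. A minimal sketch, assuming the `peft` and `transformers` packages (the helper name `load_finetuned` is hypothetical):

```python
# Hypothetical helper: attach the LoRA adapter in this repo to its base model.
REPO_ID = "Evan88/gemma-finetuned"

def load_finetuned(repo_id: str = REPO_ID):
    # Local imports so the sketch can be read without the libraries installed.
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    # Because adapter_config.json records base_model_name_or_path,
    # AutoPeftModelForCausalLM can resolve google/gemma-2b automatically.
    model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    return model, tokenizer
```

This is exactly what the fix in this hunk enables: with the field left `null`, automatic base-model resolution would fail.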
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:40282bd2156278ae698ee4de7077e199eb521cfa70c8c1a1e26743a3127c35f6
- size 7383560
+ oid sha256:52b14e92a5ece209d9c92b3ac54668cc87b853c425031b23978e6be3dd763d65
+ size 2104550952
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:82a92949f17cbf85fd2215451a9acfa139cc0751791471b3cd394eda40ab62cd
+ oid sha256:c11bf627c057f6905a3b19e6a258a4dc964067d6459c63f15c64b872897ef798
  size 5560