AyaKhaled committed
Commit acd2ea7 · verified · 1 Parent(s): 77918b0

Model save

Files changed (2):
  1. README.md +38 -39
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,59 +1,58 @@
  ---
- library_name: peft
- license: llama3.2
- base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
  tags:
  - trl
  - sft
- - generated_from_trainer
- model-index:
- - name: checkpoints
-   results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # checkpoints
-
- This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 8
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: constant
- - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 1
-
- ### Training results
-
- ### Framework versions
-
- - PEFT 0.13.0
- - Transformers 4.45.1
- - Pytorch 2.4.0+cu121
- - Datasets 3.0.1
- - Tokenizers 0.20.3
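The hyperparameters listed in the removed card map one-to-one onto transformers' TrainingArguments fields. A minimal sketch of that mapping, assuming Trainer defaults for everything the card does not list; the output_dir is guessed from the model name, and the actual training script is not part of this commit:

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the removed card's hyperparameters as
# TrainingArguments. Values mirror the card; nothing else is known
# about the original run.
args = TrainingArguments(
    output_dir="checkpoints",       # assumed from the card's model name
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=8,  # 1 per device x 8 steps = total 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,              # lr_scheduler_warmup_ratio: 0.03
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # transformers optimizer defaults, so no override is needed here.
)
```

Note that total_train_batch_size: 8 is a derived value (1 per-device batch times 8 accumulation steps) rather than a setting of its own.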
  ---
+ base_model: google/gemma-3-4b-it
+ library_name: transformers
+ model_name: checkpoints
  tags:
+ - generated_from_trainer
  - trl
  - sft
+ licence: license
  ---

+ # Model Card for checkpoints
+
+ This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="AyaKhaled/checkpoints", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aya-k-yousef-wakeb_data/gemma-3-4b-it/runs/khum9dvf)
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 0.16.0
+ - Transformers: 4.51.2
+ - Pytorch: 2.6.0
+ - Datasets: 3.0.1
+ - Tokenizers: 0.21.1
+
+ ## Citations
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
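The new card states the model was trained with SFT via TRL but omits the training script. A hypothetical minimal sketch of such a run with TRL's SFTTrainer; the dataset and every setting below are illustrative placeholders, not values recovered from this commit:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical sketch only: the dataset and config are placeholders,
# not taken from this commit.
dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",  # base model named in the new card
    train_dataset=dataset,
    args=SFTConfig(output_dir="checkpoints", report_to="wandb"),
)
trainer.train()
```

Passing report_to="wandb" is what would produce a tracked run like the one linked in the badge above.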
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:67d175651f6a86ae2c611ff90d25a98dd58c3309661cbcd3977e2122df6dce4b
+ oid sha256:e2227e658f143dd17f59fb9b4636cc664d89037cafc97c1ee3767824f220d492
  size 12931568
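Only the adapter's sha256 oid changed in this commit; the size is identical, which is consistent with re-saving the same tensor layout with updated weights. A small sketch, assuming adapter_model.safetensors has already been pulled locally via Git LFS, that checks the downloaded blob against the new pointer:

```python
import hashlib

# Verify a locally downloaded adapter against the sha256 oid
# recorded in the Git LFS pointer above.
EXPECTED = "e2227e658f143dd17f59fb9b4636cc664d89037cafc97c1ee3767824f220d492"

h = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("OK" if h.hexdigest() == EXPECTED else "hash mismatch: " + h.hexdigest())
```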