userdavek committed
Commit d2e85ab · verified · 1 Parent(s): 52ca534

amharic_lamma_sum_lora_normalized

Files changed (4):
  1. README.md +48 -38
  2. adapter_config.json +3 -12
  3. adapter_model.safetensors +2 -2
  4. training_args.bin +2 -2
README.md CHANGED
@@ -1,58 +1,68 @@
  ---
+ library_name: peft
+ license: llama3.1
  base_model: meta-llama/Llama-3.1-8B-Instruct
- library_name: transformers
- model_name: amharic_llama_sum_lora_normalized
  tags:
- - generated_from_trainer
- - sft
  - trl
- licence: license
+ - sft
+ - generated_from_trainer
+ model-index:
+ - name: amharic_llama_sum_lora_normalized
+   results: []
  ---

- # Model Card for amharic_llama_sum_lora_normalized
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

- This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).
+ # amharic_llama_sum_lora_normalized

- ## Quick start
+ This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3919

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="userdavek/amharic_llama_sum_lora_normalized", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
+ ## Model description

+ More information needed

- ## Training procedure
+ ## Intended uses & limitations

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/userdavek-wollo-university/llama_for_amh_sum_normalized/runs/ntgrpoo3)
+ More information needed

- This model was trained with SFT.
+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

+ ### Training hyperparameters

+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 128
+ - optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - num_epochs: 3

+ ### Training results

+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.4711        | 0.4913 | 400  | 0.4692          |
+ | 0.4225        | 0.9826 | 800  | 0.4294          |
+ | 0.4064        | 1.4729 | 1200 | 0.4098          |
+ | 0.3946        | 1.9642 | 1600 | 0.3988          |
+ | 0.3774        | 2.4544 | 2000 | 0.3934          |
+ | 0.3834        | 2.9457 | 2400 | 0.3919          |

  ### Framework versions

- - TRL: 0.26.2
- - Transformers: 4.57.3
- - Pytorch: 2.8.0+cu128
- - Datasets: 4.4.2
- - Tokenizers: 0.22.1
+ - PEFT 0.14.1.dev0
+ - Transformers 4.57.1
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.2.0
+ - Tokenizers 0.22.1

- ## Citations
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title        = {{TRL: Transformer Reinforcement Learning}},
-     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year         = 2020,
-     journal      = {GitHub repository},
-     publisher    = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
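The card leaves the training-set size unstated, but the results table above implies it. A back-of-envelope sketch (my arithmetic, not from the card, assuming a single training device so that each optimizer step consumes `train_batch_size × gradient_accumulation_steps` samples):

```python
# Estimate the training-set size from the log table: step 400 landed at
# epoch 0.4913, and each optimizer step consumes one effective batch.
effective_batch = 8 * 16                      # train_batch_size * grad-accum steps = 128
samples_at_step_400 = 400 * effective_batch   # 51,200 samples seen by step 400
dataset_size = round(samples_at_step_400 / 0.4913)
print(effective_batch)  # 128, matching total_train_batch_size in the card
print(dataset_size)     # 104213, i.e. roughly 104k training examples
```

The same ratio holds at the later checkpoints (e.g. 800 steps at epoch 0.9826), which is a useful sanity check that no steps were skipped.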
 
 
 
 
 
adapter_config.json CHANGED
@@ -1,12 +1,9 @@
  {
-   "alora_invocation_tokens": null,
    "alpha_pattern": {},
-   "arrow_config": null,
    "auto_mapping": null,
    "base_model_name_or_path": "meta-llama/Llama-3.1-8B-Instruct",
    "bias": "none",
    "corda_config": null,
-   "ensure_weight_tying": false,
    "eva_config": null,
    "exclude_modules": null,
    "fan_in_fan_out": false,
@@ -23,24 +20,18 @@
    "megatron_core": "megatron.core",
    "modules_to_save": null,
    "peft_type": "LORA",
-   "peft_version": "0.18.0",
-   "qalora_group_size": 16,
    "r": 16,
    "rank_pattern": {},
    "revision": null,
    "target_modules": [
-     "up_proj",
-     "o_proj",
      "v_proj",
-     "gate_proj",
      "down_proj",
+     "q_proj",
      "k_proj",
-     "q_proj"
+     "o_projgate_proj",
+     "up_proj"
    ],
-   "target_parameters": null,
    "task_type": "CAUSAL_LM",
-   "trainable_token_indices": null,
    "use_dora": false,
-   "use_qalora": false,
    "use_rslora": false
  }
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:061f4fd4b3ccec5ec43291093160bf3cb29cfe1adbd1382a11b86e7f6b0769d9
- size 167832240
+ oid sha256:ded677b016d4f49d413d04251c94abb34f908f41de8ba43f705edf6f32e923bd
+ size 113288920
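The new adapter is markedly smaller than the old one, and the sizes are informative. A consistency sketch (my analysis, not from the commit): each LoRA-targeted linear layer adds `r × (in_features + out_features)` parameters, and with the published Llama-3.1-8B-Instruct dimensions (hidden 4096, MLP 14336, grouped-query k/v output 1024, 32 layers), r = 16, and fp32 storage, the old 167,832,240-byte file matches all seven projection modules, while the new 113,288,920-byte file matches only five. That suggests the fused `"o_projgate_proj"` entry in the new `adapter_config.json` matches no real module name, so `o_proj` and `gate_proj` were effectively dropped from training:

```python
# LoRA adapter-size estimate for meta-llama/Llama-3.1-8B-Instruct, r=16, fp32.
r, layers, bytes_per_param = 16, 32, 4

dims = {  # (in_features, out_features) per projection in Llama-3.1-8B
    "q_proj": (4096, 4096), "k_proj": (4096, 1024), "v_proj": (4096, 1024),
    "o_proj": (4096, 4096), "gate_proj": (4096, 14336),
    "up_proj": (4096, 14336), "down_proj": (14336, 4096),
}

def adapter_bytes(modules):
    # LoRA adds A (r x in) and B (out x r) per targeted layer.
    per_layer = sum(r * (dims[m][0] + dims[m][1]) for m in modules)
    return per_layer * layers * bytes_per_param

old = adapter_bytes(dims)  # all seven modules targeted
new = adapter_bytes(["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"])
print(old)  # 167772160 ~ old 167832240-byte file (remainder is the header)
print(new)  # 113246208 ~ new 113288920-byte file (remainder is the header)
```

If training all seven modules was intended, splitting the entry back into `"o_proj"` and `"gate_proj"` before retraining would restore the original coverage.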
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:92a37e28694c92cfe2b1c43ab2c8489887a68cfe39d4821e0b84770744cf9c85
- size 6353
+ oid sha256:880161907ec0419b1625b1e1c5007cfaa29e8f5db7c9727fd3b44df9c7c0b518
+ size 5688