PEFT
code
instruct
llama2
Commit f83c306 · souvik0306 committed · 1 Parent(s): d67c2dd

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -6,13 +6,13 @@ tags:
 - llama2
 datasets:
 - Zangs3011/no_robots_FalconChatFormated
-base_model: llama/Llama-2-7b-hf
+base_model: meta-llama/Llama-2-7b-hf
 license: apache-2.0
 ---
 
 ### Finetuning Overview:
 
-**Model Used:** llama/Llama-2-7b-hf
+**Model Used:** meta-llama/Llama-2-7b-hf
 **Dataset:** HuggingFaceH4/no_robots
 
 #### Dataset Insights:
@@ -32,7 +32,7 @@ With the utilization of [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](ht
 - **Epochs:** 1
 - **Cost Per Epoch:** $1.313
 - **Total Finetuning Cost:** $1.313
-- **Model Path:** llama/Llama-2-7b-hf
+- **Model Path:** meta-llama/Llama-2-7b-hf
 - **Learning Rate:** 0.0002
 - **Data Split:** 99% train 1% validation
 - **Gradient Accumulation Steps:** 4
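For context on why this fix matters: the `base_model` field sits in the model card's YAML frontmatter, and Hub tooling reads it to link a PEFT adapter to its base weights, so a wrong repo id (`llama/...` instead of `meta-llama/...`) breaks that resolution. A minimal stdlib-only sketch of pulling the field out of a README, assuming the frontmatter layout shown in the diff above (`extract_base_model` is an illustrative helper, not part of any Hub API, which does full YAML parsing):

```python
from typing import Optional


def extract_base_model(readme_text: str) -> Optional[str]:
    """Return the `base_model` value from a model card's YAML frontmatter.

    Illustrative only -- real tooling (e.g. huggingface_hub) parses the
    full YAML block rather than scanning lines.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return None  # no frontmatter block at the top of the file
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing delimiter: frontmatter ended without the field
        if line.startswith("base_model:"):
            return line.split(":", 1)[1].strip()
    return None


# Frontmatter as it reads after this commit's fix.
card = """---
datasets:
- Zangs3011/no_robots_FalconChatFormated
base_model: meta-llama/Llama-2-7b-hf
license: apache-2.0
---
### Finetuning Overview:
"""
print(extract_base_model(card))  # meta-llama/Llama-2-7b-hf
```

With the pre-commit value, the same lookup would yield the nonexistent repo `llama/Llama-2-7b-hf`, which is exactly what this one-line metadata change corrects in all three places.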