Americo committed on
Commit e6204f9 · 1 Parent(s): 4299dfd

update model card README.md

Files changed (1): README.md (+2 -15)
README.md CHANGED
@@ -4,7 +4,6 @@ tags:
 model-index:
 - name: llama2_finetuned_chatbot
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -12,7 +11,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # llama2_finetuned_chatbot
 
-This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
+This model is a fine-tuned version of [NousResearch/llama-2-7b-chat-hf](https://huggingface.co/NousResearch/llama-2-7b-chat-hf) on the None dataset.
 
 ## Model description
 
@@ -28,17 +27,6 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: float16
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -50,7 +38,7 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 100
+- training_steps: 10
 
 ### Training results
 
@@ -58,7 +46,6 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- PEFT 0.4.0
 - Transformers 4.30.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.16.1