Americo committed on
Commit 4299dfd · 1 Parent(s): f43fe2f

llama2-7b-farmatodo_finetuned2
README.md CHANGED
@@ -4,6 +4,7 @@ tags:
 model-index:
 - name: llama2_finetuned_chatbot
   results: []
+library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -27,6 +28,17 @@ More information needed
 
 ## Training procedure
 
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -46,6 +58,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
+- PEFT 0.4.0
 - Transformers 4.30.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.16.1
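The quantization block added to the README maps onto a plain dictionary. The sketch below is an assumption on my part (the training script itself is not part of this commit): it reconstructs the listed values and adds a small sanity check reflecting the fact that bitsandbytes' 8-bit and 4-bit modes are mutually exclusive.

```python
# Hypothetical reconstruction of the quantization settings listed in the
# README; the actual training script is not included in this commit.
quant_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "float16",
}

def validate_quant_config(cfg: dict) -> bool:
    """Basic sanity checks: 8-bit and 4-bit loading are mutually exclusive,
    and the 4-bit quantization type must be one bitsandbytes supports."""
    if cfg["load_in_8bit"] and cfg["load_in_4bit"]:
        return False
    if cfg["load_in_4bit"] and cfg["bnb_4bit_quant_type"] not in ("fp4", "nf4"):
        return False
    return True
```

In a real training script these values would typically be passed to `transformers.BitsAndBytesConfig`, but how the author constructed them is not visible from the diff.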
adapter_config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "auto_mapping": null,
-  "base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
+  "base_model_name_or_path": "NousResearch/llama-2-7b-chat-hf",
   "bias": "none",
   "fan_in_fan_out": false,
   "inference_mode": true,
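With `base_model_name_or_path` switched to `NousResearch/llama-2-7b-chat-hf`, downstream users would attach this LoRA adapter to that checkpoint. The sketch below is mine, not the author's: the function name and the adapter repo argument are hypothetical, and it assumes the PEFT 0.4.0 / Transformers 4.30.2 versions listed in the README.

```python
def load_finetuned_chatbot(adapter_repo: str):
    """Load the base model named in adapter_config.json, then attach the
    LoRA adapter from `adapter_repo` (e.g. a local clone of this repo).

    Imports are kept inside the function so the sketch can be read (and
    the module imported) without transformers/peft installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("NousResearch/llama-2-7b-chat-hf")
    model = PeftModel.from_pretrained(base, adapter_repo)
    tokenizer = AutoTokenizer.from_pretrained("NousResearch/llama-2-7b-chat-hf")
    return model, tokenizer
```

Note that an adapter trained against one base checkpoint generally should not be applied to another, which is presumably why this field was corrected alongside the new `adapter_model.bin`.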
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a3089e3e3a19d7ea3251b8ba069a490b66f3854fdd2871f62c8d999340f41dd1
+oid sha256:e1c0434caf6de77f55ba946a92b50cefe232f30cf82145d3530fb796af2f9610
 size 134264202
runs/Jan28_16-06-07_7c56aae4039a/events.out.tfevents.1706458017.7c56aae4039a.4742.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:661ab17249963281323ecbe00afa5831f167c72e6ac7bc815330a352a9b97faf
-size 4532
+oid sha256:2b8b7e378eb7809db95c2ca97f2d82e218ef9d859c7ea148c331cb2ca99dc08b
+size 4880
tokenizer.json CHANGED
@@ -1,11 +1,6 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
+  "truncation": null,
   "padding": null,
   "added_tokens": [
     {