Commit 184c7b6 · Parent: 084d99e
Update README.md

README.md CHANGED

@@ -7,6 +7,13 @@ license: llama2
CyberBase 13b 8k *base model* - (llama-2-13b - lmsys/vicuna-13b-v1.5-16k)

+ ## Test run 1 (less context, more trainable params):
+ - sequence_len: 4096
+ - max_packed_sequence_len: 4096
+ - lora_r: 256
+ - lora_alpha: 256
+ - trainable params: 1,001,390,080 || all params: 14,017,264,640 || trainable%: 7.143976415643959

# Base cybersecurity model for future fine-tuning; it is not recommended for use on its own.
**CyberBase** is a QLoRA fine-tune of [lmsys/vicuna-13b-v1.5-16k](https://huggingface.co/lmsys/vicuna-13b-v1.5-16k) on [CyberNative/github_cybersecurity_READMEs](https://huggingface.co/datasets/CyberNative/github_cybersecurity_READMEs), trained on a single RTX 3090.
It might, therefore, inherit the [prompt template of FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#prompt-template).
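The trainable-parameter figure in the log line above is consistent with rank-256 LoRA adapters attached to every linear projection of llama-2-13b. A minimal sketch of that arithmetic, assuming the usual seven target modules (q/k/v/o attention projections plus gate/up/down MLP projections) and the published llama-2-13b dimensions; the target-module list is an assumption, not stated in the README:

```python
# Verify that the reported trainable-parameter count matches rank-256 LoRA
# adapters on all linear projections of llama-2-13b (assumed target set:
# q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj).

HIDDEN = 5120         # llama-2-13b hidden size
INTERMEDIATE = 13824  # llama-2-13b MLP intermediate size
LAYERS = 40           # llama-2-13b decoder layers
R = 256               # lora_r from the config above

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds two low-rank factors: A (r x d_in) and B (d_out x r)."""
    return r * (d_in + d_out)

per_layer = (
    4 * lora_params(HIDDEN, HIDDEN, R)          # q_proj, k_proj, v_proj, o_proj
    + 2 * lora_params(HIDDEN, INTERMEDIATE, R)  # gate_proj, up_proj
    + lora_params(INTERMEDIATE, HIDDEN, R)      # down_proj
)
trainable = LAYERS * per_layer
print(trainable)  # 1001390080 — matches the log line in the README

ALL_PARAMS = 14_017_264_640  # "all params" from the same log line
print(f"trainable%: {100 * trainable / ALL_PARAMS}")
```

The count matching exactly suggests the adapters were applied to all seven projection matrices rather than only the attention projections.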
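Since the model may inherit the Vicuna conversation format, a hedged sketch of what that template looks like: this follows the Vicuna v1.1 wording from the linked FastChat docs, and the exact system string should be checked against that page before relying on it:

```python
# Sketch of the Vicuna v1.1 prompt format documented by FastChat for
# vicuna-13b-v1.5 checkpoints. Wording is reproduced from the FastChat
# docs; verify against the linked page before use.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Vicuna v1.1 template."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("What is a buffer overflow?"))
```

Generation is expected to continue after the trailing `ASSISTANT:` marker.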