---
library_name: peft
license: apache-2.0
---
This is a QLoRA-trained Llama 2 7B adapter, trained on 1,250 high-quality examples from the uncensored WizardOrca dataset plus a custom GPT-4 dataset (examples selected by length).
Training hyperparameters:

- 4 epochs
- 2e-5 learning rate
- micro-batch size 1
- batch size 128
- 8-bit Adam optimizer
- 2048-token sequence length
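A minimal sketch of loading this adapter on top of a 4-bit quantized base model with `peft`, assuming the standard `transformers`/`bitsandbytes` stack. The repo ids below are placeholders, not the real paths; substitute the actual base model and adapter locations.

```python
def load_adapter(base_id: str = "meta-llama/Llama-2-7b-hf",
                 adapter_id: str = "your-username/this-adapter"):
    """Load the Llama 2 7B base in 4-bit and attach this QLoRA adapter.

    Both repo ids are placeholders. Imports are deferred so the sketch only
    needs transformers/peft/bitsandbytes installed when actually called.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    # 4-bit quantization config matching a typical QLoRA inference setup.
    bnb = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_compute_dtype=torch.float16)
    base = AutoModelForCausalLM.from_pretrained(base_id,
                                                quantization_config=bnb,
                                                device_map="auto")
    # Attach the LoRA adapter weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, adapter_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    return model, tokenizer
```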
The model can use the standard Llama 2 prompt format or the Alpaca chat format, since this dataset was converted to Alpaca format.
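For reference, a small helper that builds prompts in the standard Alpaca template; this is a sketch of the common Alpaca format, not code shipped with this adapter:

```python
def alpaca_prompt(instruction: str, inp: str = "") -> str:
    """Build an Alpaca-style prompt, with or without an input section."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```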
Footnotes
---
The model has not lost its ability to handle 4096-token contexts, even though this adapter was trained with a 2048-token sequence length.
The model performs exceptionally well based on my preliminary human evaluation.
Benchmarks coming soon.
(trained with oobabooga webui)
https://github.com/oobabooga/text-generation-webui
Original dataset creator: psmathur
https://huggingface.co/psmathur