---
library_name: peft
license: apache-2.0
---
This is a QLoRA adapter for LLaMA 2 7B, trained on 1,250 high-quality examples drawn from the uncensored WizardOrca dataset plus a custom GPT-4 dataset (examples selected by length).
|
| Setting | Value |
| --- | --- |
| Epochs | 4 |
| Learning rate | 2e-5 |
| Micro-batch size | 1 |
| Batch size | 128 |
| Optimizer | adam8bit |
| Sequence length | 2048 tokens |
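The settings above roughly correspond to a QLoRA fine-tuning configuration like the sketch below, using the `peft` and `transformers` APIs. The LoRA rank, alpha, dropout, and target modules are assumptions not stated in this card; the effective batch size of 128 is reached here via gradient accumulation over micro-batches of 1.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# LoRA rank/alpha/dropout/target_modules are assumptions; the card
# does not state them. Only the values in the table above are known.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Values taken from the table: 4 epochs, 2e-5 LR, micro-batch size 1,
# effective batch size 128 (via gradient accumulation), 8-bit Adam.
training_args = TrainingArguments(
    output_dir="./qlora-llama2-7b-adapter",  # hypothetical path
    num_train_epochs=4,
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=128,
    optim="adamw_bnb_8bit",  # 8-bit AdamW via bitsandbytes
)
```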
|
The model can use either the standard LLaMA 2 prompt format or the Alpaca chat format, as the training dataset was converted to Alpaca format.
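For reference, a minimal sketch of the two prompt formats mentioned above. The Alpaca template is the community-standard one; the exact system prompt used during training is an assumption, and `build_alpaca_prompt` is a hypothetical helper, not part of this repository.

```python
# Standard LLaMA 2 chat format (system prompt wrapped in <<SYS>> tags).
LLAMA2_TEMPLATE = "[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

# Community-standard Alpaca instruction template.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_alpaca_prompt(instruction: str) -> str:
    """Format a user instruction with the Alpaca chat template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_alpaca_prompt("Summarize the plot of Hamlet in one sentence."))
```

The generated text after `### Response:` is the model's answer; stop generation on the next `### Instruction:` marker if the model starts a new turn.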
|
Footnotes
---
The model has not lost its ability to handle 4096-token contexts, even though this adapter was trained with a 2048-token sequence length.
The model performs exceptionally well based on my preliminary human evaluation.
Benchmarks coming soon.
|
(Trained with the oobabooga text-generation-webui: https://github.com/oobabooga/text-generation-webui)

Original dataset creator: psmathur (https://huggingface.co/psmathur)