---
license: cc-by-nc-4.0
---
|
|
This is my first attempt at a QLoRA fine-tune of Mixtral 8x7B Instruct v0.1, using the same raw-text dataset as the Dendrite model, with the following notable settings:
|
|
```
lora_r 256
lora_alpha 256
..._max_len 256
learning_rate 0.000001
num_train_epochs 2
```
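
For context, here is a minimal sketch of how these hyperparameters could map onto a standard QLoRA setup with `transformers`, `peft`, and `bitsandbytes`. The `target_modules`, dropout, and batch size are illustrative assumptions; the card does not specify the actual training stack used.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the base model -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Mirrors the r / alpha values listed above; target_modules is an
# assumption, since the card does not say which projections were adapted.
lora_config = LoraConfig(
    r=256,
    lora_alpha=256,
    lora_dropout=0.05,  # assumption, not listed in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Mirrors the learning rate and epoch count listed above.
training_args = TrainingArguments(
    output_dir="qlora-out",
    learning_rate=1e-6,
    num_train_epochs=2,
    per_device_train_batch_size=1,  # assumption
    bf16=True,
)
```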
|
|
|
|
|
The output is a little less dry, but it still maintains the full level of functionality you would expect from Mixtral Instruct. And it still responds to the
|
|
|
|
|
```
[INST]
Do a thing
[/INST]
```
|
|
|
|
|
format.
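
As a sketch of what that looks like in practice, assuming the adapter is loaded on top of (or merged into) the base Instruct model, the standard chat template produces exactly this `[INST] ... [/INST]` wrapping:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# apply_chat_template wraps the user message in [INST] ... [/INST],
# matching the format shown above.
messages = [{"role": "user", "content": "Do a thing"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```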
|
|
It's not a massive change in the output, but I plan to run a similar but larger dataset with more epochs and a higher learning rate.