---
license: mit
datasets:
- yahma/alpaca-cleaned
---
This repo contains a low-rank adapter for LLaMA-7b fit on the Cleaned Alpaca dataset (with the new GPT-4 training data).

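As a usage reference, below is a minimal inference sketch with `transformers` and `peft`. It is not part of the original card: the adapter id `yahma/alpaca-7b-lora` is a placeholder assumption (substitute this repo's actual id), and the prompt follows the standard Alpaca template.

```python
# Minimal inference sketch (assumptions: placeholder adapter id, and that
# torch, transformers, peft, and accelerate are installed).
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "yahma/llama-7b-hf",          # base model used for fine-tuning
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")

# Attach the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "yahma/alpaca-7b-lora")
model.eval()

# Standard Alpaca prompt template (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three uses of a low-rank adapter.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
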
This version of the weights was trained with the following hyperparameters:

- Cleaned dataset: Snapshot April 8, 2023
- Epochs: 6 (checkpoint with the lowest eval loss, at 3.6 epochs, uploaded here)
- Validation set size: 1500
- Batch size: 128
- Micro batch size: 8
- Cutoff length: 512
- Learning rate: 3e-4
- LoRA r: 16
- LoRA target modules: q_proj, k_proj, v_proj, o_proj

That is:

```bash
python finetune.py \
    --base_model='yahma/llama-7b-hf' \
    --data_path='yahma/alpaca-cleaned' \
    --num_epochs=6 \
    --cutoff_len=512 \
    --output_dir='./lora-alpaca' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --val_set_size=1500 \
    --micro_batch_size=8
```
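
For readers working with `peft` directly, the hyperparameters above roughly correspond to a `LoraConfig` like the sketch below. This is not part of the original card: `lora_alpha` and `lora_dropout` are not stated here and are assumptions based on common alpaca-lora defaults.

```python
# Sketch of the corresponding PEFT configuration. r and target_modules come
# from the list above; lora_alpha and lora_dropout are assumed defaults.
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=16,                 # assumption: not stated in this card
    lora_dropout=0.05,             # assumption: not stated in this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```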