Replicating Baize finetuning via LoRA (https://github.com/project-baize/baize)

Trained for 1 epoch on the 7B model.

Note: these are only the adapter weights — please bring your own Llama base model.