This is a fine-tuning of the LLaMA 7B model in the style of the Alpaca dataset and setup, but using LoRA. For details of the data and hyperparameters, see https://crfm.stanford.edu/2023/03/13/alpaca.html. This repo contains only the LoRA weights, not the original LLaMA weights, which are released for research use only.
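Because only the adapter weights are shipped, it helps to recall what LoRA actually stores. A minimal NumPy sketch of the idea (illustrative only, not this repo's training code): a frozen pretrained weight W is augmented with a low-rank update scaled by alpha/r, where only the small matrices A and B are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; in the real model these are the attention projection
# sizes, and r << d is the low-rank bottleneck.
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight (not shipped here)
A = rng.normal(size=(r, d_in))       # trainable LoRA matrix, random init
B = np.zeros((d_out, r))             # trainable LoRA matrix, zero init

# The adapter contributes a rank-<=r correction to the frozen weight.
delta = (alpha / r) * (B @ A)
W_eff = W + delta                    # effective weight used at inference

# With B initialized to zero, the adapter starts as a no-op.
assert np.allclose(W_eff, W)
```

To use this repo you therefore still need the original LLaMA 7B weights to reconstruct W; the LoRA weights here only supply A and B.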