What was the lora_config used to create this fine-tuned version of falcon-7b? #42
opened by ecorro
I'm trying to fine-tune it on another instruct dataset, and I want to train in bf16 rather than with 4-bit QLoRA. Can anyone point me to an appropriate lora_config setup? I'm particularly interested in the 'target_modules' parameter.
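Not the authoritative config for this checkpoint, but here is a sketch of settings that are commonly used for Falcon-7B with peft. All hyperparameter values below (r, alpha, dropout) are assumptions; only the module names come from the Falcon architecture, where Q/K/V are fused into a single `query_key_value` linear layer:

```python
# Hypothetical LoRA settings for fine-tuning Falcon-7B in bf16.
# Illustrative defaults, NOT the config used to create this model.
# The keys map directly onto peft.LoraConfig keyword arguments.
lora_kwargs = {
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "bias": "none",
    "task_type": "CAUSAL_LM",
    # Falcon fuses Q/K/V into one "query_key_value" projection;
    # "dense" is the attention output projection, and the two
    # "dense_*" layers are the MLP up/down projections.
    "target_modules": [
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
}
```

With peft installed you would build the config as `LoraConfig(**lora_kwargs)`. Targeting only `query_key_value` also works and trains fewer parameters; adding the dense/MLP layers usually improves quality at some memory cost.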
ecorro changed discussion status to closed