Fine-tuning on this model

#2
by ilml - opened

First of all, thank you for the awesome work on quantization. This is the best Llama 3.1 8B quantized model I have found so far, including when comparing against the original model's benchmarks.

How can I fine-tune using this model? Will PEFT work with it?
