Request details about fine-tuning

#1
by FelixdoingAI - opened

Hello. I am really interested in your fine-tuned TinyLlama, and it is great.

Now I am fine-tuning TinyLlama on another dataset.
Could you please share your fine-tuning details?

Yes, the model was first converted to the MLX format to enable training on Mac. I then fine-tuned it using the Alpaca dataset focused on physics. To optimize the training process, I applied PEFT (Parameter-Efficient Fine-Tuning) by adding adapters to the base model and trained only those adapters.
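The adapter-only training described above can be illustrated with a minimal sketch. This is a hypothetical toy example of the low-rank adapter idea (as in LoRA), not the author's actual code or the MLX implementation: the base weight W stays frozen, and only the small factors A and B would receive gradient updates during fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 8, 16, 2

W = rng.standard_normal((d_out, d_in))          # frozen base weight
A = np.zeros((rank, d_in))                      # adapter factor, trainable
B = rng.standard_normal((d_out, rank)) * 0.01   # adapter factor, trainable

def forward(x):
    # Effective weight is W + B @ A; with A initialized to zero,
    # the adapted model starts out identical to the base model.
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)  # adapters start as a no-op

# Parameter-efficiency: only the adapter factors are trained.
full_params = W.size          # 128 parameters in the base weight
adapter_params = A.size + B.size  # 48 trainable adapter parameters
```

The base model's behavior is unchanged at initialization, and the number of trainable parameters is much smaller than full fine-tuning, which is what makes this practical on a Mac.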

The results were promising in terms of structured and well-formatted outputs. However, the model still struggled with delivering accurate information, particularly in domain-specific details.

Hope this helps!

Thank you for your reply. What does the "Alpaca dataset focused on physics" mean?

And how many epochs did you use?
