toptentips-text_0.wav|I'm going to walk you through 10 quick tips for fine-tuning. For each of those, I'll point you to one or two Trelis videos on YouTube and also point you to the right branch if you're working out of the Trelis Advanced Fine-Tuning repository. Tip number one is to start with a small model. I recom...
toptentips-text_1.wav|So you want to create 10 question-answer pairs and use those to choose which base model is going to perform best. So just by running those on different base models, you can see which one is going to give you the best baseline for starting off your fine-tuning. Then after you do any training run, y...
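The base-model comparison described in this row can be sketched as a simple loop. Everything here is a hypothetical stand-in: `score_fn` represents whatever grading method you use (exact match, an LLM judge, etc.), and the model names are placeholders.

```python
def pick_best_base_model(models, qa_pairs, score_fn):
    """Run the same QA pairs through each candidate base model and
    keep the one with the best average score (tip two, sketched)."""
    best_model, best_score = None, float("-inf")
    for model in models:
        # Average the per-question scores for this candidate model.
        avg = sum(score_fn(model, q, a) for q, a in qa_pairs) / len(qa_pairs)
        if avg > best_score:
            best_model, best_score = model, avg
    return best_model, best_score
```

In practice `score_fn` would call the model's generate endpoint and compare the output against the reference answer; here it is left abstract on purpose.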
toptentips-text_2.wav|If you do want to automate a little more how you generate synthetic datasets, you can check out this video here on dataset preparation with LLMs. Tip number six is always use a validation dataset. If you don't have one, you can just split off 10 to 20% of your training dataset. You want to be ...
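Splitting off 10 to 20% of the training data for validation, as tip six suggests, might look like this minimal plain-Python sketch (with the Hugging Face datasets library you would typically reach for `Dataset.train_test_split` instead):

```python
import random

def split_validation(rows, val_fraction=0.1, seed=42):
    """Split off a validation set from training rows.

    Returns (train_rows, val_rows); the seed makes the split reproducible.
    """
    rows = rows[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)   # deterministic shuffle
    n_val = max(1, int(len(rows) * val_fraction))
    return rows[n_val:], rows[:n_val]
```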
toptentips-text_3.wav|Something I should mention as well is that you can waste a lot of time with installations and getting stuck in getting set up for fine tuning. One way around that is to use an image or a template that sets up your CUDA and PyTorch to a specific version. I've got a one-click template here for RunPo...
toptentips-text_4.wav|There are two videos of relevance here. If you want to understand the pros and cons of full fine-tuning versus QLoRA or LoRA, take a look at this video. And if you want to understand the complexities of doing multi-GPU training, check out multi-GPU fine-tuning. Moving to two last tips, tip number ...
toptentips-text_5.wav|Here I recommend the ORPO YouTube video, and there's also a branch by that name in Advanced Fine-Tuning. ORPO is also supported in the Unsloth branch, where there's a Jupyter notebook and also just a Python .py script you can run. And ORPO is supported as an option in the multi-GPU branch too...
toptentips-text_6.wav|And last of all, you can consider preference fine-tuning, in which case I'd recommend using ORPO, which will do supervised fine-tuning and odds ratio preference optimization at the same time. Now, this approach here I've talked about for language models, but it also works for video and speech or ...
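The ORPO objective mentioned here combines a supervised fine-tuning loss with an odds-ratio preference term. Below is a rough numeric sketch of that idea, not the paper's exact formulation: `lam` is an assumed weighting, and the probabilities stand in for sequence likelihoods of the chosen and rejected answers.

```python
import math

def orpo_loss(nll_chosen, p_chosen, p_rejected, lam=0.1):
    """Simplified ORPO-style objective: SFT negative log-likelihood on the
    chosen answer plus a penalty on the odds ratio between the chosen and
    rejected answer probabilities (illustrative only)."""
    odds = lambda p: p / (1.0 - p)
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    # -log(sigmoid(log_odds_ratio)): small when chosen is far more likely.
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))
    return nll_chosen + lam * l_or
```

The key property: when the model already prefers the chosen answer (`p_chosen` well above `p_rejected`), the preference penalty is small, so the loss is dominated by the ordinary SFT term.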
toptentips-text_7.wav|And there are also scripts on function calling inference and speed test too. I'll talk a little more about those just at the end of this video. Last of all, these repos, of which there are four, they're available for purchase individually, but you can also now buy a repo bundle, which will give yo...
toptentips-text_8.wav|Now I'll briefly show you each of those two main branches. So here we'll go into the Unsloth branch. The way that you run training in this Unsloth branch is by setting up the configuration in a config file. I've also got a config file that you can use here if you want to do some function calling fi...
toptentips-text_9.wav|Here I've selected a main branch for training, and I've selected the training split. You can fix a max number of rows here. This will save you time if you just want to download and run on, say, 100 rows instead of on a massive dataset. Now, I spoke earlier about generating a validation set. You ca...
toptentips-text_10.wav|And then I think this is maybe a ChatML format, so the answer will start after <|im_start|>assistant. This allows the training loop to check within your prompt. It will check for where the start of the assistant's answer is, and then it will only look at the loss after that point. After this, the...
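The loss-masking behaviour described here (computing loss only on tokens after the assistant marker) can be sketched in plain Python over token-id lists. This is an illustration, not the repo's actual code: `marker_ids` would be the tokenization of the ChatML assistant marker, and -100 is the conventional ignore index for cross-entropy training loops.

```python
def mask_labels_before_assistant(input_ids, labels, marker_ids):
    """Set labels to -100 up to and including the assistant marker so the
    loss is computed only on the assistant's answer tokens."""
    labels = labels[:]  # don't mutate the caller's list
    n = len(marker_ids)
    for i in range(len(input_ids) - n + 1):
        if input_ids[i:i + n] == marker_ids:
            end = i + n
            labels[:end] = [-100] * end  # ignore everything before the answer
            return labels
    return labels  # marker not found: leave labels unchanged
```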