| {"url": "https://www.youtube.com/watch?v=EeZIKQmWSXg", "transcription": " Hey, whiz. Hey Wiz, so if I'm a super beginner trying to get into fine-tuning, should I use Hugging Face and Peth Library or should I maybe pick up Mistral Fine-Tune instead? Hugging Face is probably great, yeah. So is it like a fundamentally different method that is being used for fine tuning between like a peft laura and the approach we'll see today and mr fine tune no no it's the same same thing under the hood yeah same same okay okay so is it a quote lightweight code base that enables quote memory efficient and performant fine tuning on mistral models at least yes absolutely it's that yes is hugging face also a lightweight code base that enables memory efficient and performant fine tuning on mr the light the lightweight we can quibble about for sure okay but the But the rest of it, absolutely yes. Okay, okay, okay. But it does the thing. It did the fine tuning, right? It did, yes. Okay, okay. So we're going to sort of try to assess today if this thing provided a, quote, simple guided entry point to fine tune mistral models. And, of course, we can quibble about simple and guided, but it did the thing today, right? It did the thing. So, you know, it does the thing that it says on the 10 and here we are folks, another day, another tool. Welcome to the open source LLM edge, everybody. We're going to dive in and get to the bottom of the concepts and code behind Mistral FineTune. I'm Dr. Greg, that's the whiz, and we are co-founders of AI Makerspace. We're excited to dive into this new tool, and by the end of today, you'll sort of recall what powers and underlies fine-tuning throughout the industry, not just open source tools, but even a lot of the closed source tools that you might have your hands on today. Of course, if you have questions along the way, please use the Slido. We will get to questions probably throughout this event. This is going to be kind of a discussion heavy one. So keep the questions coming in the Slido. And also if you've got questions that are super relevant to the discussion we're having at the moment, YouTube live. All right, everybody, let's get into it today. We're going to go ahead and kick off fine tuning. Mistral 7B with Mistral Fine Tune. And aligning ourselves to today, we want to really make sure that we understand the legend, Laura, that's at the core of all of the fine-tuning that we see. We want to understand how to use Mistral FineTune. We're going to show you how to do that. We're going to do some instruct tuning with it. And we want to compare and contrast what we saw with this new library to what we're comfortable with, what we're used to with Hugging Face's parameter efficient fine tuning library and methods like LoRa and QLoRa. So we'll start with a review and then we'll dive into what we're seeing from Mistral Fine Tune, talk about Hugging Face versus Mistral FineTune. Do some fine-tuning and we'll again discuss throughout. So everybody, Laura, let's review here. First off, fine-tuning. What are we talking about? Well, we're talking about modifying modifying the behavior of an LLM by updating the weights of the neural network, the weights of the transformer. And full fine-tuning, it means updating all of the weights. 
But full fine-tuning, because these models are so large, is often quite infeasible for the average Joe, for the GPU-poor out there like we are and like we know many of you are. So we need a better way, and the better way the industry has really adopted is low-rank adaptation (LoRA). This is not full fine-tuning, but rather fine-tuning only part of the neural network, part of the transformer, using a factorized-matrix approach to do so.

Let's recall the OG paper here, October 2021, light years ago. Quote from the abstract: "As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible." Absolutely classic. Hence, they propose LoRA, which "freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture," meaning each attention layer within each decoder block, "greatly reducing the number of trainable parameters for downstream tasks." Okay, hold on, say what? Freezes the pre-trained model weights and injects trainable rank decomposition matrices. Hold that thought; we're going to do some stacking and then we'll discuss.

Mistral FineTune, just released, says: Mistral FineTune is a lightweight codebase that enables memory-efficient, performant fine-tuning. It is based on LoRA, a training paradigm where, quote, "most weights are frozen and only 1% to 2% additional weights in the form of low-rank matrix perturbations are trained." Low-rank matrix perturbations. Okay, so we've got a training paradigm, 1% to 2% additional weights in the form of low-rank matrix perturbations. That's how Mistral is talking about it today, in May 2024. And the folks from Microsoft who wrote the LoRA paper, talking about it in 2021, said it freezes the pre-trained model weights and injects trainable rank decomposition matrices.

Okay, so let's cover a little terminology before our discussion. One of the things that really inspired the authors of the LoRA paper was a paper from December 2020 called "Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning." Here's a quote from that paper: common pre-trained models have a low intrinsic dimension, so there exists a re-parameterization that is effective for fine-tuning and can be as effective as the full parameter space. That whole idea of a re-parameterization is where LoRA comes in. And of course we're using a low-rank adaptation approach, so it's important to understand the idea of matrix rank. The easy way to understand it is to think of a simple problem where we have a bunch of columns in a dataset and count the number of linearly independent columns. This idea is very common for anybody who has studied matrix algebra: how many features, how many columns, are actually giving new information. That's how we can contextualize rank, how much of the information is actually important to, let's say, pay attention to. And the other classic matrix-algebra technique used here is decomposition: we're decomposing a problem into its constituent parts, thereby breaking a difficult computation into simpler tasks. So take all of this together, from the idea of an intrinsic dimension, to low rank, to matrix decomposition, to trainable injected decomposition matrices, to low-rank matrix perturbations.
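To make that parameter arithmetic concrete, here is a minimal sketch in NumPy, assuming a hypothetical 4096-wide layer and a rank of 64 (illustrative numbers, not taken from any specific model):

```python
import numpy as np

d, r = 4096, 64  # hypothetical hidden size and LoRA rank

# Full fine-tuning would train the whole d x d update, delta W.
full_params = d * d                      # 16,777,216 trainable values

# LoRA instead trains B (d x r) and A (r x d), with delta W = B @ A.
lora_params = d * r + r * d              # 524,288 trainable values

print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.1%}")   # ~3.1% of full

# The product B @ A is still d x d, but its rank is at most r.
# That bound is the "low rank" in low-rank adaptation.
B = np.zeros((d, r))                     # B starts at zero in the paper
A = np.random.randn(r, d) * 0.01         # A starts small and random
delta_W = B @ A                          # rank <= r (exactly 0 at init)
print(delta_W.shape, np.linalg.matrix_rank(delta_W))
```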
We're going to wrap all this together in a discussion now, and I'd like to invite the Wiz back up to talk through it. Wiz, you did a pretty cool video on this idea of LoRA quite some time ago, and I wonder if you can give us your overview, your thoughts, using the diagram that gets shown in everybody's presentation, the one on the first page of the paper. As you look at this, maybe you can walk us through what you see, and maybe define some of these letters along the way for us.

Yeah, so basically this is a very fancy way to say the following. As we train our model, think of it this way: our model has two "real," quote-unquote, components. One is this idea of a base weight matrix, and the other is this idea of an update weight matrix. Now, typically we don't need to pull these apart, and in fact we wouldn't, because it adds a lot of overhead where we have to add them back together and everything like that. But because we can represent our weight updates as a separate update matrix, we can lock in those base pre-trained weights and then take that delta matrix and represent it in this low-rank product-matrix form: two matrices whose product gives us our actual weight-update matrix. The key insight here is that the base model weights are different from the final fine-tuned weights, that difference is some delta W, and we can represent that delta W in this low-rank form. The idea is that we're going to pay computational overhead for this, because we have to keep multiplying these matrices together and adding them back, but it's worthwhile to spend that little bit of extra compute to save a massive amount of required GPU memory. So while the training, the fine-tuning, is slower, we're adding latency to our training, we massively reduce the number of actual parameters we need to hold in memory, which means we can train these models on much smaller hardware than previously required. And that's the crux of it: by training A and B, multiplying them together, and adding them back to that base weight matrix, what we're really doing is figuring out the best A and B, the ones whose resulting weight updates make us good at our task. So there you go.

Okay. So it's really important to understand, then, that this is actually only important during training, is that right? Where we're actively updating the weights?

That's a great thing to ask, and the answer is: kind of. The fact that we can keep the weights in a low-rank form means they're very lightweight, and if we can add them quickly to our base weights, then at inference time we can add whichever one we want. Say we had Mistral, for example, and we fine-tuned it to do summarization. We'd have an adapter, which is just the LoRA weights, that we could apply to that base model to make it good at summarization. But say we also fine-tuned it on a math task, or a translation task. At inference time, we can choose which adapter to use. So it is very important for training, but we can also leverage it to make inference more, quote-unquote, powerful. Okay, okay. Yeah.
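As a rough sketch of that adapter-swapping idea, using the Hugging Face PEFT library rather than Mistral's stack (the adapter repository names here are made up for illustration):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the frozen base model once.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")

# Attach one adapter, then register a second one alongside it.
# (These adapter repo ids are hypothetical placeholders.)
model = PeftModel.from_pretrained(base, "my-org/summarization-lora",
                                  adapter_name="summarization")
model.load_adapter("my-org/math-lora", adapter_name="math")

# Pick whichever "bit" fits the task at inference time.
model.set_adapter("math")
```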
So we can swap out these low-rank adapters at inference, but what we're doing during training is essentially plugging in an empty adapter and training it, calibrating it to the thing we want it to be able to do, right? I mean, ultimately, when we're done training, do we have an adapter, or do we have a model that's fine-tuned?

Because we kept our base model frozen, we never actually touched it, right? We still have the base model; it still exists. But we also have this artifact we've created that we commonly refer to as an adapter, which is just the LoRA weights. And as long as that base model doesn't change, we can carry those adapters around and then use them like a bit in a drill: whenever we have that base model, we can use those adapters. So it's important to understand that, exactly: as long as the drill, the base model, is the same, we can use that bit, that adapter, anywhere. We don't have to save the base model every time; we can download it when we need it, yada yada, and move this all around. It's fine. But the base model has to be literally exactly the same, or else the bit won't fit.

Ah, yes. It's got to be the same drill, right? Exactly. Or at least the same little claw on the end of the drill. Okay.

So then there's this difference in language between the paper and the Mistral FineTune README. Can you talk a little about this "trainable rank decomposition matrices" versus "matrix perturbations" idea? Why are we getting this differentiation in language? Where's the perturbation idea coming from?

It's just a difference in language; it means the same thing, it's not something separate. When we're training a weight matrix, we are perturbing the weight matrix. When we update our weights, we are wiggling them about, and a fancier way to say "wiggling it about," of course, is "to perturb." There's no difference; the language is just fancier. It's got more college words in it.

Perturb, yes. So it's talking about that delta W, that little change in weights, which we're then decomposing into the product of the B and A matrices here. And then, as we think about the other couple of letters on this chart: I've got x, of dimension d. Can you tell me what these dimensions are and why they're relevant here? And then h as well?
Yeah, so the idea is that we have some base matrix, and that base matrix is some d-by-d matrix. Our initial input is x, and our transformed output is h. All this is saying is: we have some d-by-d matrix, represented on the left by that big blue square, and we pair it with a d-by-r and r-by-d matrix set. We take the product of B and A, and then we add the result to our pre-trained weights, which are d by d. And of course, thanks to the way the matrix math works out, d by r times r by d results in a matrix of size d by d, so all the sizes match up. x is our input, and h is going to be our output from this process.
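A minimal NumPy sketch of that forward pass, with toy dimensions, might look like this:

```python
import numpy as np

d, r = 8, 2                      # toy sizes: hidden dim d, LoRA rank r
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pre-trained weights, d x d
A = rng.standard_normal((r, d)) * 0.01  # trainable, r x d
B = np.zeros((d, r))                    # trainable, d x r, zero-initialized

x = rng.standard_normal(d)              # input representation

# LoRA forward pass: h = W x + B (A x).
# At initialization B = 0, so h matches the base model's output exactly.
h = W @ x + B @ (A @ x)

# During fine-tuning only A and B receive gradients; W stays frozen,
# and the effective update delta W = B @ A has rank at most r.
```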
All right. So x is basically heading into the transformer; it's the embedded and positionally encoded information flowing through the block. And h: is this the output of the attention layer, or the output of the entire block here?

So this is actually pretty interesting. We can use LoRA wheresoever there is a matrix. It doesn't have to be just the attention mechanism; it can be in the MLPs, anywhere there's a big matrix that we don't want to be big but instead want to be small. In the initial LoRA paper, we only applied it to specific subsets of the weights in the model, specifically the Q and V projections, I believe, if I'm remembering correctly, and only in some layers. Now we apply it much more freely, to basically everything: the MLPs, which are the feed-forward networks, everywhere we can. In fact, with things like QLoRA we found that's actually even better; it results in better models at the end of the day. But the idea is this: LoRA is not a process that's unique to attention, or to specific layers in the transformer architecture. It's useful because large language models specifically are so huge and have this property of low intrinsic dimension. So we can't just use this on any old matrix, but for transformer matrices, yeah, we apply it pretty freely. We just slap it in there. Okay.

And, I mean, let's go back to the whole name, right? We say "LoRA," but it's low-rank adaptation. So it really is a technique that can be applied much more broadly than we thought in the initial paper. Is that right? I would say the application space is probably the same: large language models are where we're going to see this the most, and other kinds of larger models where we've trained things so much that they have this property, where the matrices are so huge and the data is so plentiful. But yes, the way we apply it has evolved, or what we apply it to within that system has evolved, even if the actual crux of the application is the same. It's useful for LLMs; it's not very useful for your smaller networks, for something like a really small BERT. We're not going to be thinking about this too much there.

Okay, yeah, because it's all about reducing the number of trainable parameters, and if we've got a consumer-grade GPU and we can do a relatively complete fine-tuning on a pretty small model, we'll just do that. We don't need LoRA, right? It's really all about making sure it aligns to the GPU power of the consumer today, for the GPU-poor of us out there, right? All right. Sounds good. Thanks, Wiz. We'll come back to you to show us how to do Mistral FineTune.

And speaking of Mistral FineTune, let's take a closer look at the specific library. What we can see with Mistral FineTune is that it is this lightweight codebase based on LoRA, et cetera. Now, for maximum efficiency it's recommended that you use a big-daddy GPU; the codebase is optimized for those kinds of setups. But for smaller models we can use a single GPU, and that's the way we're going to show the fine-tuning today. They did provide a note on the repo that the goal is to provide a, quote, "simple guided entry point to fine-tune Mistral models." This is what we're trying to test out today, and we'll see what you guys think as well: did they achieve their goal, or is there still work to do with Mistral FineTune?

They walk us through a number of types of fine-tuning in the repo. You can do pre-training, that is, continued pre-training; you can do instruction tuning; and you can do function calling. Now, these are all fine-tuning, okay? Continued pre-training is fine-tuning, instruction tuning is fine-tuning, tuning for function calling is fine-tuning, and they're all using the LoRA approach under the hood. To get this done, it's a very simple order of operations, similar to what you would see in any other fine-tuning library: prepare the dataset, verify it, start the training, make sure your config is right, and then test it out by doing inference.

They also note that you can easily plug Weights & Biases (W&B) into this, and we went ahead and did that today, because why not try out all the features and goodies? When we looked at W&B, we were specifically looking at training loss, evaluation loss, and evaluation perplexity, although there are a number of other things that Wiz will show you are available if you're linked up to W&B, to look inside the black box as you're training. Okay. Now, when we think about loss, remember, everybody: how do we calculate loss? We're going to use cross-entropy. To go much deeper on cross-entropy, join us again next week when we're talking logits and loss.
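In the meantime, here is a minimal PyTorch illustration of cross-entropy (and the perplexity that gets charted alongside it), with made-up numbers:

```python
import torch
import torch.nn.functional as F

# Toy next-token prediction: 3 positions, vocabulary of 5 tokens.
logits = torch.tensor([[2.0, 0.5, 0.1, -1.0, 0.3],
                       [0.2, 1.5, 0.3,  0.0, 0.1],
                       [0.1, 0.2, 0.4,  3.0, 0.2]])
targets = torch.tensor([0, 1, 3])   # the "correct" next-token ids

# Cross-entropy = negative log-probability assigned to the target token,
# averaged over positions; this is the loss LLM fine-tuning minimizes.
loss = F.cross_entropy(logits, targets)
print(loss)             # lower when the logits put more mass on the targets

# Perplexity, the eval metric mentioned above, is just exp(loss).
print(torch.exp(loss))
```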
We're going to go back down deep into the transformer and talk about how exactly these weight updates happen during training, associated with the loss function. Now, the other thing Mistral FineTune allows you to do, and this is an open question, is this super valuable or not, is leverage their Mixtral models, the mixture-of-experts models. And this is directly from the repo: a best practice for Mixtral models is that you really should train them a couple of times independently, because depending on the seed you use during training, you can get a really, really high degree of variance between instantiations of fine-tuning Mixtral models.

And I've got a quick discussion point here that I want to bring Wiz back up for, in terms of Mixtral. Is there a reason we're not fine-tuning Mixtral today? It seems like it's cooler, it's newer. Is it harder or something? What's the deal?

It's not harder. In fact, it's the same; it's just fine-tuning, nothing changes. But the Mixtral models have a GPU requirement that exceeds the possibilities of the Colab environment. Remember, Mixtral doesn't require a ton of active weights for inference, but it does require a lot of weights to be loaded in GPU memory. Even though we're not touching all those weights when we do inference, we need to be able to, in order to have all the correct paths through the model available to us. That requires a larger GPU memory capacity, even if we're not going to be using as many weights while we're doing inference. Inference is still zippy, still fast, but we have to have the capacity to hold the big model and all the available paths in it.

That's right. And as we said before, you can use LoRA not just on the attention layers but also, like you mentioned, on the feed-forward layers. For everybody trying to recall what Mixtral looks like and how it's different from an architectural perspective: that feed-forward network layer is replaced with a sparse mixture-of-experts layer, right? So you're saying you have to hold each of these mini neural networks, feed-forward network one, two, three, et cetera, in memory. Even if you use injectable trainable low-rank decomposition matrices, you still have to hold all of this there, and that makes it more computationally intensive.

And remember, we not only have to have those low-rank decomposed matrices, we also need those base matrices, those big honking frozen weights, which are going to take up all of our capacity, right? The adapters take up very little space, thankfully, but we've got to load all of the rest into memory so that every path is available. It's as if each of these feed-forwards is the equivalent of a door: we have to have all the doors available to us, even if we're not going to go through all of them every time, because we might need to get to a different room the next time we go through. So we have to have them all there, even though we're not going to use them all on any given forward pass.
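For a rough sense of the scale involved, here is some back-of-the-envelope arithmetic using commonly cited parameter counts for Mixtral 8x7B (approximate figures, not from the talk):

```python
# Back-of-the-envelope memory math for a Mixtral-style MoE model.
# Parameter counts are the commonly cited approximations for Mixtral 8x7B.

total_params  = 46.7e9   # all weights that must sit in GPU memory
active_params = 12.9e9   # weights actually used per token (2 of 8 experts)

bytes_per_param = 2      # fp16 / bf16

# What you must LOAD vs. what you actually COMPUTE with per token:
load_gb    = total_params  * bytes_per_param / 1e9   # ~93 GB
compute_gb = active_params * bytes_per_param / 1e9   # ~26 GB

print(f"must hold: ~{load_gb:.0f} GB, active per token: ~{compute_gb:.0f} GB")
# Inference stays zippy (few active weights), but the memory bill
# scales with the TOTAL expert count, LoRA or no LoRA.
```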
Okay, yeah, that makes a lot of sense. So literally, the more experts you have, the more compute you're forced to use, even if you're fine-tuning, even with LoRA, even if you're quantizing. It just scales with the number of experts. That's right. Okay, very cool.

All right, then, let's set this up; we're just about ready to rock and roll into the demo today, guys. Instruction tuning with Mistral 7B is going to be based on, first of all, some instruction-tuning data we've grabbed off the shelf: the Dolly 15k dataset. This is available directly on Hugging Face. It's a classic dataset that's got a lot of different categories of instructions: closed question answering, classification, open QA, information extraction, et cetera. So it's a broad-perspective view. Now, we're not going to use all 15,000 data points for fine-tuning, and we're just going to do a few hundred iterations, but this will give us a feel for the difference between the base model and how well it does with our instructions after we fine-tune it. We're going to use Mistral 7B Base v0.3. The only difference between v0.2 and v0.3 is, like so many models today, that sweet, sweet long context window: up to 32k, 32,768 to be exact. That's the real distinction from v0.2. So with that, I'm going to pass it off to the Wiz to show us how to go through Mistral FineTune to do some instruction tuning on Mistral 7B. Take it away, man.

Yes. Okay. So this is pretty straightforward, thanks to this library; however, it does require some setup, and we'll talk about it. First thing we've got to do is grab some dependencies, pretty standard stuff. We're going to grab mistral-finetune, which is the repository, which can be found here. The repository has great instructions. It has a tutorial that doesn't work currently, though I'm sure they'll update it. The basic idea here is pretty straightforward: we need to get the model and do some stuff, and we're going to walk through the notebook. We'll clone the repository, cd into it, and install all the requirements we need. Easy peasy. You can ignore these dependency conflicts in the Colab environment; not worried about it.

Then we need to download the model. We're going to download Mistral 7B v0.3. As Greg said, this is a long-context model; however, keep in mind that because we're doing this in a Colab environment, we're not going to be taking advantage of the long context. It's just not possible in Colab, so we're not going to do it. If you're using the recommended equipment, which is a node of GPUs, you're going to be fine. But the idea is that we're going to use this 7B v0.3, still a great model, we love to see it. Then we're going to extract that model into a mistral_models folder. Easy. Now, the next step we have to think about is formatting our dataset into the correct format. We're going to do instruction tuning, so we're going to follow the instruction-tuning guidelines they give us in their repository.
As you can see in the notebook, this is a JSONL file that we need, with this key "messages," which holds a list of messages. Each message needs a role and content, and this is very typical if you've seen fine-tuning before: we have the role "system" with the system prompt as its content, then our role "user" with the user prompt as content, and then our role "assistant" with the response as content. And that's it; this is a pretty classic example of fine-tuning, and it's easy enough to create this JSONL file. You do have to make sure your data is in this specific format. It is important that you've contorted things into this format, or else you will not find success, unfortunately.

Now, we're going to be using some data from LIMIT ("Less Is More for Instruction Tuning"); we're specifically going to be using Instruct V1, aka Dolly HH-RLHF, and this is the dataset we're going to use today. It's a fairly standard dataset, pretty classic, from back in the day, it feels like. The idea is that we have some instructions, we have some responses, and we're going to train the model to get good at following that instruction task.

Okay, so to do this, we're first going to create a data directory to shove all our data into. We're going to cheat a little bit here and use Hugging Face Hub instead of just pandas; Hugging Face Hub is easy to use, and the dataset format is familiar and great. We're going to use the notebook login, because if you're using this dataset it might require accepting a EULA, and to make sure we've done that, we'll need to prove we are who we say we are on Hugging Face. Then we load our dataset, which is MosaicML's Dolly HH-RLHF. The best part of this dataset is that it's simple and straightforward, so it's easy to contort into what we need it to be. As you can see, it's not currently in the format Mistral expects, in fact definitely not in that format, so we have to contort it. We create a simple formatting function that does that: it creates the expected format in a column called "data," where we have our "messages," a list of messages each containing the key "role" with the role and the key "content" with the content. And away we go, easy peasy. We make sure our formatting function works by testing it on one of the samples: we go to our messages, we have the system prompt ("Below is an instruction..."), perfect; then our user, "What is Kangen water?"; and then we have this explanation as the assistant's response. Very cool.

Okay, so we map our Mistral FineTune formatting function over the entire dataset, training and test. We can see that we now have this data response with about 60,000 prompts, and then our test set with about 5,000 prompts. Nice and good. We save those as JSONL files, since that's what the mistral-finetune library currently expects; we just write these out, dumping the data into that JSONL file and separating entries with new lines. That's where the JSONL comes from, right? JSON Lines: every line is a new JSON object. We do that for our test set as well; we're going to call our test split "eval," because we're not actually going to do testing with it. We're going to evaluate during training with it, which is always fun, but it's just called "test" by default in datasets, so we're going to leave it there.
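A minimal sketch of that reshaping step might look like the following. The helper and file names are ours, not the notebook's, and for brevity we map the prompt to a single user message rather than splitting out a system prompt as the notebook does:

```python
import json
import os
from datasets import load_dataset

# MosaicML's Dolly HH-RLHF split; its columns are "prompt" and "response".
ds = load_dataset("mosaicml/dolly_hhrlhf")

def to_mistral_format(example):
    # mistral-finetune expects {"messages": [{"role": ..., "content": ...}]}.
    return {"messages": [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]}

train = ds["train"].map(to_mistral_format)
evald = ds["test"].map(to_mistral_format)   # "test" split, used as eval

# JSONL = one JSON object per line.
os.makedirs("data", exist_ok=True)
for path, split in [("data/train.jsonl", train), ("data/eval.jsonl", evald)]:
    with open(path, "w") as f:
        for row in split:
            f.write(json.dumps({"messages": row["messages"]}) + "\n")
```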
Now we need to verify the dataset, and here we enter into what I believe is the current shortfall of this particular library in its current form. So we run these reformat-data scripts. First of all, they error silently for the most part: if your data is not in the correct format, they might just not say anything. If your data is in a recognizable format that doesn't work, then they will complain, which is what we want; that's ideal. And they do try to reformat, but as they call out in the repo, if you have some exotic data, this isn't going to do it. You need to do the work to get the format into the shape the library desires. That's not new, and it's not specific to Mistral FineTune.

Now, the next piece is our training, and that means our YAML file. Instead of long args lists or a bunch of parameters, we're going to use this idea of a YAML file, and the YAML file is going to dictate everything. First of all, if we look at their repository, we have a bunch of cool hyperparameters, sorry, a bunch of cool options for the training: checkpoint frequency, log frequency, LoRA rank, it's got it all. We're going to express all of this in this .yaml. It's not necessarily the best thing in the world, but it works, and that's what we want.

So first we set up the data part of our YAML file: we pass in a "data" header, and then we give an instruct-data and an eval-instruct-data tag where we pass the paths to our training and eval data. Easy peasy. Then we have our model path, the model id or path, which just points to the model we downloaded. Then we set some classic hyperparameters: LoRA rank, sequence length, batch size, micro-batches, max steps, learning rate, weight decay. It's got a lot of things, though not everything, and this is one of the limitations of this particular strategy: if we look at the options currently available to us, it's not everything we're used to if we're coming from another library. However, it makes sense, and it works, and that's great. Now, you'll notice the sequence length being used here is 4k. This is because we have a limited amount of GPU memory and we want to keep it relatively low; where we might be able to get away with something in the 7 to 8k range, we're going to keep it at 4k to make sure we're not blowing through our memory. Our LoRA rank is going to be 64, dealer's choice; we just can't make it too high, or else we'll run out of memory.
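A sketch of building that config from Python and dumping it to YAML follows. The key names are our best recollection of the repo's example configs, so treat them as approximate, and values not mentioned in the walkthrough (batch size, learning rate) are placeholders:

```python
import yaml  # PyYAML

# Approximate reconstruction of the training config described above.
config = {
    "data": {
        "instruct_data": "data/train.jsonl",
        "eval_instruct_data": "data/eval.jsonl",
    },
    "model_id_or_path": "mistral_models/7B-v0.3",  # hypothetical local path
    "lora": {"rank": 64},     # dealer's choice, bounded by GPU memory
    "seq_len": 4096,          # kept low to fit single-GPU Colab memory
    "batch_size": 1,          # placeholder value
    "num_microbatches": 8,    # the "eight micro-batches" mentioned later
    "max_steps": 300,         # short run; not a full pass over the data
    "optim": {"lr": 6e-5, "weight_decay": 0.1},  # placeholder values
    "seed": 0,
    "log_freq": 1,
    "eval_freq": 100,
    "ckpt_freq": 100,
    "save_adapters": True,    # persist the LoRA adapters, not the base model
    "run_dir": "/content/limit_test",
}

with open("example.yaml", "w") as f:
    yaml.safe_dump(config, f)
```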
And of course, we're only going to do this for 300 steps, so we're not going to fully train on our dataset; that would take a very long time. We're going to start the learning rate rather high and then decay it at a pretty standard rate, I think from the Chinchilla paper, and then we'll put our output directory at this /content/limit_test. Then we just have to convert this to YAML format, so we do that here. You'll also notice we have some other parameters we can set: the seed, how often we log, how often we eval (and whether we eval at all), how often we save a checkpoint, and "save adapters." Remember, because we're doing this adapter fine-tuning, we need to be able to save those adapters periodically. We're not actually training the model... it's silly to say, because we're definitely training the model, right? But we're actually training these adapters, and the adapters modify the model. So this is the idea: we want to save those adapters, those two broken-out matrices, as we're going through training. And then our run directory is just where we save this run.

We're also going to integrate Weights & Biases, like Greg said. W&B is an easy integration; we just have to provide these options: "mistral-finetune" is what we're going to call the project, the run name is going to be "dolly-instruct," we provide our API key, and we set offline equal to false. There you go. Then we save all of that out to a YAML file, and we can use that YAML file to validate our data. What happens here is that a script validates all of our data: the data is correctly formatted, we get stats for the data, we get all this cool stuff. It also gives us, in a very fun way, an ETA for how long this might take, which is pretty cool, and you love that. So we validate the train and eval sets and see that there are no errors, twice in a row. No errors twice in a row probably means there are no errors, which is always ideal.

So now that we've done this, we can start our training. Training is very straightforward. We just need to make sure, because we're in Colab, that we provide these additional environment variables so we target the GPU in our Colab environment. Then we make sure there's nothing in the output folder, and then we run torchrun with the training script from mistral-finetune, pointing to that same YAML we just created above and used to validate our data. So that's great; we love to see that.

I see a question in the chat: what does number of steps mean in this context? That's just the number of iterations we're going to run through. When the file says we're training at our sequence length with our batch size and a number of micro-batches, the number of steps is the number of times we repeat an iteration on a batch, which contains eight micro-batches. So that's the idea. You can see that it's currently training now; we trained beforehand, and we're doing another run just to show off the W&B integration. Very cool. So let's look at W&B. What you see there is from the completed run.
This here is from the run that's currently ongoing, and you can see we have a bunch of different interesting things being tracked. If we look at something like our training loss, we can see this slowly declining training loss, but it's very noisy. Our learning rate is being decayed, as we would expect, and we just finished an eval, of which we'll do many more. So how will this look at the end? Well, this is an example of the completed run, where we have all 300 of our steps. You can see that our perplexity goes down, our evaluated training loss goes down, and our eval loss goes down. This is the expectation, of course: as we train, loss goes down. A very classic example, right? And this is the idea with the W&B integration: it's all just done for us, we don't have to do anything. You love that.

So now that we're done training the model, what do we have to do? Well, we've got to use the model, right? We're going to use mistral-inference to do this. Mistral-inference is Mistral's version of how to do inference with the Mistral models, unsurprisingly. We're going to load our tokenizer from the downloaded model, and load our model from the downloaded model. And remember, right, the model is the same; we just need those adapters. So then we load our LoRA adapters from our training directory, and then we can send it a request, very similar to how we would with OpenAI, very, very convenient. Then we tokenize our request, generate, and print some results. You can see that our results are very straightforward: "Machine learning is a subset of artificial intelligence that allows computers to learn from data without being explicitly programmed." I mean, it's great, right? It does the thing; it follows the instruction. The instruction was to explain machine learning in a nutshell, so it did great.
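For reference, the inference flow just shown looks roughly like this. Module paths and file layouts follow our recollection of the mistral-inference README from around this time and may differ by version, and the local paths (model folder, adapter checkpoint) are hypothetical:

```python
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Load the tokenizer and the frozen base model from the download folder.
tokenizer = MistralTokenizer.from_file("mistral_models/7B-v0.3/tokenizer.model.v3")
model = Transformer.from_folder("mistral_models/7B-v0.3")

# The base model is untouched; the fine-tune lives in the adapter file.
model.load_lora("/content/limit_test/checkpoints/lora.safetensors")

# The request format is deliberately OpenAI-like: role/content messages.
request = ChatCompletionRequest(
    messages=[UserMessage(content="Explain machine learning to me in a nutshell.")]
)
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens], model, max_tokens=128, temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```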
And that is Mistral FineTune, a library that helps us fine-tune Mistral models. Don't forget to like, comment, subscribe, and hit the bell; it helps us out. We're here every Wednesday, having a great time talking about cool stuff. So thanks. I'll pass you guys back to Greg.

Thanks, Wiz. So that's Mistral FineTune. And the last thing we'd like to point out, to kick off our discussion: how are these two things different? Let's remind ourselves that the problem with full fine-tuning is that it's really not cool if you're GPU-poor. The Hugging Face libraries use these parameter-efficient fine-tuning methods that are simply better than full fine-tuning for the same problem it's trying to solve. The number one PEFT method is LoRA; that's the one you should know, and if you're a beginner, as we mentioned at the beginning, you should probably still start there. But Mistral FineTune does do the thing. Their CDN, their content delivery network, is rather slow: it took nearly an hour, maybe 45 minutes, to download the model. Their opinionated data formatting is going to give you some potential issues if you have complex data formatting. And remember, Mixtral is simply a more compute-intensive thing to deal with, not to mention you need to do multiple runs because of the nature of the Mixtral models, aligning with the best practices in their repo. And then LoRA just sits at the bottom of everything. You can do it on attention layers; you can do it on multi-layer perceptrons, feed-forward networks. You can plug in the adapter at inference, or plug in the empty adapter and calibrate it during fine-tuning. So make sure you have a handle on the concepts beneath the code, and that is LoRA.

To kick off Q&A, I'd like to invite Wiz back up to the stage. One more little discussion point: as we think about Hugging Face versus Mistral FineTune, what jumps out to you as similarities and differences people should keep in mind?

Yeah, I mean, they're both used for fine-tuning models; they both will fine-tune models, so you can fine-tune models with both. You love to see that. Otherwise, the differences are quite superficial; it's doing the same stuff under the hood. Transformers has had a long time to polish this out, to build things that work exactly the way you expect and have all the bells and whistles we've come to love about that kind of library, and Mistral is just getting started. So I imagine that over time Mistral FineTune will evolve into a solution that makes a lot of sense and is quite useful. For the time being, they're on the path; it's a good first couple of steps in that direction, but the ease of use is just not there yet, in my opinion.

Okay. All right. Yeah, it takes a long time to create really, really clean, user-friendly products, and Mistral is putting out a bunch of stuff these days. We look forward to seeing what they continue to put out as a true competitor to OpenAI, it seems, across the sea.

All right, so we're going to get started with Q&A. We've got a solid ten minutes, everybody. We've got a QR code up at the top of your screen where you can add questions and upvote the questions you like best. I'll kick it off with the top-voted question today: can we combine these adapters? Meaning, train one for programming, another for medical, and combine them together? Let's talk about combining adapters, I guess.

Yeah. Model merging exists and is basically that, so the answer is a simple yes, we can do that. And model merging is basically adding them together, right? These perturbations, these injectable low-rank decomposition matrices, these perturbations to the weights: that's what we're adding together when we do model merging. And we do have some model-merging material we've gone through recently, with the creator and with RC, on our YouTube channel. Check that out.

Next question: can we use a multi-adapter setup instead of multimodal? How does multi-adapter fit into multimodal? And I think there's a different question baked in here too, Rami: having one adapter as a router. Maybe we'll take those one at a time. So, multi-adapter for multimodal? Yeah, so it probably won't work for multimodal. It's not the case that each adapter can handle a separate modality, but it is the case that we can create a multi-adapter system instead of multiple models. In terms of getting a vision model or an audio model as an adapter to a language model, though, it's not going to work. We need to have that image or audio modality as part of the model already.
And then having one adapter as a router, one model where we build a router that uses the right adapter? Yeah, sure, absolutely, that's possible. We might use a simple classification model on the head of it to route to the correct adapter, but that's still a space very much being explored by people. Well, that kind of reminds me of the idea that within the repo we have the function-calling capability, and of course, when we talk about fine-tuning for function calling, we can very easily imagine a router being used in a more agentic way, right? And I think one of the key takeaways that maybe isn't obvious to everybody is that function calling is just another form of fine-tuning; it just requires, what, a more specific formatting, Wiz? That's basically it. That's it, yeah.

Okay, so what's the best GPU to buy? Here's a good one for you, Wiz: what's the best GPU for small-scale industrial applications? A 4090. Just get a 4090; it's a great card. A 3090 will also do, and the 3090 Ti, I think, is the 24-gig card. You don't need to spend enough for an A6000, you don't need to. So basically, just accumulate cards that have 24 gigabytes of GPU RAM, in whatever flavor is cheapest to you, and go from there. Stack them together until you have your little 24-gig-card cluster.

Okay, so Don asks: isn't YAML less likely to handle data-format issues well, compared to JSON? Well, we're only using the YAML for the configuration file; everything else is in JSON or JSONL, and the data is held in JSONL. We're just using YAML as the config, and that's just a choice. YAML and config: name a more iconic duo. I can't. I can't name one. Yeah.

Okay, can we do this without W&B? I know the answer to that: yes, you can, it's definitely optional. Any other comments on that? Would you recommend W&B? Yeah, W&B is great. It's free, and it works. The real thing to say is that you should just use W&B because it's free and it's great. It's funny, because we were having the same discussion in class last night: why should we use W&B? It's free and it's great; I think that's a good enough reason.

Okay, another question from Rami: any guide sessions or scripts to prepare and test a standard dataset for Llama, Mistral, or other LLM fine-tuning dataset formats? I think this is a dataset-formatting question, and I'd probably point you to specific fine-tuning events. We've got a fine-tuning playlist: if we did Llama, you've got to put it in Llama format; if we did Mistral, you've got to put it in Mistral format; and we've done other ones like OLMo and a few others as well. I would check out our fine-tuning playlist. Anything else to add there, Wiz? No, I think that's a great place to start. It's a lot of reading and working, but you'll get it quickly. And if we thought a dataset-formatting event would slap, we would do one; this is the first time I've heard that feedback, so if you guys want it, let us know and we'll put it together.

How does the choice of optimizer, like Adam versus stochastic gradient descent, impact the performance of a fine-tuned LoRA model? Is there a right answer for the optimizer? The right answer is Adam, or a version of Adam. Just straight up, that's what everyone will use, does use. There are paged versions, fused versions, all kinds of fun kernel optimizations that make it very zippy and very quick. So Adam is basically where we're at.
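For what that looks like in practice, here is a minimal sketch of the standard PyTorch pattern, assuming a model whose base weights have already been frozen:

```python
import torch

def make_optimizer(model: torch.nn.Module, lr: float = 1e-4):
    """Build an optimizer over only the trainable (adapter) weights.

    Assumes the base weights were frozen (requires_grad=False) and only
    the LoRA adapter weights were left trainable.
    """
    trainable = [p for p in model.parameters() if p.requires_grad]
    # AdamW is the Adam variant most fine-tuning stacks default to.
    return torch.optim.AdamW(trainable, lr=lr, weight_decay=0.1)
```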
Hmm. Here's an interesting question. Since we brought up attention-layer versus MLP-layer fine-tuning: which one's better? Which should I do, fine-tune the attention layers or fine-tune the MLPs? Why not do it all? I mean, I guess you could target either one if you really wanted to, and intuitively, attention feels like the place to start. But we'll do all of it, just because it's the recommended thing to do; it's easiest, it's the lowest memory, and we're going to be fine. And to be very clear, we're going to be fine-tuning those layers no matter what; it's just a question of whether we're doing full fine-tuning or LoRA adapter fine-tuning. They're going to get fine-tuned either way. So there you go. Boom, there it is. That's a great place to wrap up. Thanks, Wiz, for showing us Mistral FineTune today.

And that brings us to the end of today's event. Don't forget to like and subscribe and ring that bell if you liked this session. And if you're not in our Discord yet, you should definitely join; we've got great community vibes going. I'd really love to see folks that join also consider building and deploying their first-ever LLM application. This is the Beyond ChatGPT event we put together a while ago now, and it's something we require from everybody who takes our AI Engineering Bootcamp. So if you're up for a challenge, I'd encourage you to see if you can build and deploy your first LLM application and share it in Discord in the build-ship-share channel. There's a ton of awesome activity going on all the time, with folks building their very first applications.

Now, if you really want to accelerate your AI engineering learning, you might check out our AI Engineering Bootcamp. We've got a lot of great, cool, fun, interesting announcements coming soon. We just launched cohort three; cohort four is in August, so you can start planning for it now, if you want to learn with me, with a great group of peers, AI engineers, leaders, and many others, as well as get access to really high-quality opportunities to get in front of hiring partners based on your certification. Consider this as a pathway in 2024 for you.

Next week, we talk loss functions in our Logits and Loss event, all on training and fine-tuning. We're going down deep into the transformer again, so join us for that one. And finally, provide any feedback that you have; we take it seriously, and we try to improve all the time. As always, in the meantime, we will do our best to keep building, shipping, and sharing, and we hope that you do the same. Thanks, everybody. Have a great rest of your week, and we'll see you all real soon. Bye, guys.