{"url": "https://www.youtube.com/watch?v=Anr1br0lLz8", "transcription": " Hey, Wiz, is there a way to know what comes out of any RAG application that we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. Okay. So there's no absolutes. It's absolutely correct. It's absolutely true. That's pretty difficult. Okay. Okay. So there's no absolutes, but is there a way to know that changes that we make to the system to our RAG application makes the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah. I think like assess RAG systems? Yeah. I think like a RAG assessment kind of make. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg and we are here to talk RAG eval today. Hey, I'm Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout out in the chat where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from Langchain, their most recent stable release and most stable version ever, we're also going to outline how you can actually assess your RAG systems using the RAG assessment or RAGIS framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into Langchain and show how we can go about this improvement process. We are very excited to have the Ragas co-founders and maintainers Jitin and Shaul joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about Ragas. We have the creators in the house today. And of course, we'll see Wiz, aka the LLMm wizard and cto at am makerspace back for demos real soon so let's get into it everybody today we're talking rag evaluation this black art that everybody is really really focused on as they start to build prototype and deploy these systems to production in 2024. as we align ourselves to this session we want to get out of this what's up with this langchain v 0.1 that just came out we want to understand how we can build a rag system with the latest syntax and then also evaluate it there's a lot of changes happening on the ragas side just as on the langchain side finally we want to see how we can pick different tools different ways to improve our system our application see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into laying chain, then we'll go into a high level view of RAG and see exactly where the different laying chain components fit in. Finally, we're going to see what you all came here for today, the RAGIS metrics and how to implement the RAGIS framework. So we'll be building, we'll be evaluating, we'll be improving today and the Q&A should be pretty dope. So, Langchain v0.1.0. What's Langchain all about again? Well, it's all about enabling us to build LLM applications that leverage context, our so-called context aware, so we can connect to other sources of data. We can do lots of interesting prompt engineering. We can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff. And look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. 
And we're doing that in the context of V0.1.0. The blog that they put this out with said, the journey of a thousand miles always starts with a single step. And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes they'll be incrementing this and they'll continue to support v0.1 for a time every time this gets incremented of course as bug fixes and new features come out, they're also going to be incrementing now in this third v0.1.x slot. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. There was a lot of great stuff in the v0.1 release. And we're going to primarily focus on retrieval today, and also on this sort of langchain core that leverages L-C-E-L or the langchain expression language. So in terms of retrieval, there's going to be a lot that you can check out and add after today's event that you can then go assess to see if it actually helps your pipelines. So definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to kind of focus on this Langchain core. This is the Langchain expression language, and this is really a very easy kind of elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, into operating in production environments and monitoring and visibility tooling with LangSmith. So really it kind of all starts from here and allows you to really do some industry-leading best practice stuff with these tools. Now today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations from Langchain community. Chains, of course, are the fundamental abstraction in laying chain, and we will use those aspects to build our RAG system today. When we go and we assess, then we're going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, embedding representation, And then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, we're looking for similar things, similar to the vector question we asked. We can find those similar things. 
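The "syntax like this" that the LCEL discussion above alludes to is pipe-style composition. Below is a minimal, illustrative sketch only; the prompt wording and model choice are placeholders and this is not the exact chain built later in the walkthrough.

```python
# Minimal LCEL sketch: a prompt, a chat model, and an output parser composed
# with the | operator. Purely illustrative; not the demo's actual chain.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
llm = ChatOpenAI(model="gpt-3.5-turbo")

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"question": "What is LCEL?"}))
```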
And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query. You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
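As a rough sketch, the RAG prompt template and the system/human message style described above might look like the following in code. The instruction wording paraphrases the talk and is not the notebook's exact prompt.

```python
# Sketch of a RAG prompt in LangChain's "system"/"human" message style.
# The instruction text paraphrases the talk, not the actual notebook prompt.
from langchain_core.prompts import ChatPromptTemplate

rag_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Use the provided context to answer the user's query. "
     "If the context does not contain the answer, say \"I don't know\"."),
    ("human", "Context:\n{context}\n\nQuery:\n{question}"),
])
```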
So one of the things that people oftentimes are worried about language there's a ton of extra kind of uh unnecessary things in there well this is you know goes a long way to solving that problem um and it's awesome so let's see first which version we're working with uh so if you're watching this in the future you can be sure so we're on version 0.1.5 so we're already at dot five um line chain you know they're they're hard at work over there uh we're gonna need to add our open AI API key since we are going to be leveraging open AI uh basically this is a uh you know way that we can both use our lm for evaluation but also for generation and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much so the case that, you know, we have the same stuff that we always have. We need to create an index and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started as we always do with creating the index. Now we can and will still use LCEL. LCEL is important. You know, one of the things that we're going to show in this notebook, because you don't have to use LCL, they've implemented some abstractions in order to modify the, you know, the base chains that you're used to importing to LCL format, so you get all the advantages. But we're still going to look at LCL today, because it is an important piece of the line chain puzzle. because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package where we're going to grab our document loader to get our web-based loader. You know, importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it used to, as it always has. You know, our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check out that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating index. We have a document in this case. You know, it's just one document, but we have it and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core. So this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here and then we can split those documents this is less so focused on a specific uh Lang chain rag and more on the evaluation so we're just kind of choosing these values uh you know to to showcase what we're trying to showcase you see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text embedding AIDA 002. We don't need to use this embeddings model. And it looks like very soon we'll be able to use OpenAI's latest model once the tick token library updates there's a PR that's ready just waiting to be merged which is going to let us be able to do that but for now until that change is implemented we're going to stick with text data embedding 002 and this is like the classic embedding model, right? Nothing too fancy. 
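A sketch of the indexing steps just described: load the blog post, split it, and set up the embedding model. The blog URL, chunk size, and overlap below are assumptions; the talk only says the chunking values were chosen arbitrarily and that the split produced 29 documents.

```python
# Load the LangChain v0.1.0 blog post, split it into chunks, and set up the
# ada-002 embedding model. URL and chunking values are illustrative guesses.
import os
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

os.environ["OPENAI_API_KEY"] = "sk-..."  # supply your own key

loader = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/")  # assumed URL
docs = loader.load()                      # a single web-page document

splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=50)
split_docs = splitter.split_documents(docs)   # the demo ends up with 29 chunks
print(len(split_docs))

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
```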
Just what we need. When it comes to our face vector store, what we need is to get that from lane chain community. But otherwise, this is exactly the same as it used to be, right? So there's no difference in the actual implementation of the VectorStore. It's just coming from the community channel. We'll pass in our split documents as well as our embedding model and away we go. Next, we're gonna create a retriever. This is the same as we've always done, dot as retriever on our VectorStore. Now we can interact with it through that retrieval API. We can test it to see it working. Why did they change to version 0.1.0? And we get some relevant documents to that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG, we need to look at creating that AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull it from the hub. So there are lots of different community created or Langchain created hubs. The idea is that, you know, you can just pull one that fits your task from the hub, but the one that we're showcasing is maybe not ideal. So we're going to go ahead and create our own. You can still do this process if you want to create your own. You don't have to use a, you know, one from the hub. And so we're just going to create the simple one, answer the question based only on the following context. If you cannot answer the question in context context please respond with i don't know that's a classic we pass in our context we pass on our question away we go and you'll notice that this is exactly the same as it used to be let's go laying chain now we'll set up our basic qa chain i've left a lot of comments here in the uh implementation of this uh lcel chain in order to hopefully clarify exactly what's going on. But for now, we'll just leave it at we can create this chain using LCL. And we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with RAGUS. So we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which is going to showcase a little bit more exactly what we can do to do this a little bit easier with still getting the advantages of LCL. You'll notice we're just using GPT-305 Turbo. That's it. And there you go. Now we can test it out and we can see, you know, what are the major changes in v0.1.0? The major changes are information. It goes on, it gives a correct answer. That's great. And we have what is a laying graph. And basically the response from the LLM is, I don't know, which is a laying graph. And basically, the response from the LLM is I don't know, which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step now that we have this base chain would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was Langchain v0.1.0 RAG. Now let's talk RAG assessment. The RAGIS framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that, we can assess different pieces that helped generate that answer within the RAG system. 
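Putting those pieces together, here is a sketch of the FAISS index, retriever, and an LCEL chain that returns both the answer and the retrieved context (the context is needed later for evaluation). Variable names are mine, and `split_docs`, `embeddings`, and `rag_prompt` are assumed from the sketches above.

```python
# FAISS vector store + retriever + an LCEL chain that passes the retrieved
# context through alongside the generated response (required for RAGAS later).
from operator import itemgetter
from langchain_community.vectorstores import FAISS
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

vectorstore = FAISS.from_documents(split_docs, embeddings)
retriever = vectorstore.as_retriever()
llm = ChatOpenAI(model="gpt-3.5-turbo")

rag_chain = (
    {"context": itemgetter("question") | retriever,   # retrieve docs for the question
     "question": itemgetter("question")}
    | RunnablePassthrough.assign(response=rag_prompt | llm)  # add the generation
)

result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"].content)   # generated answer
print(result["context"])            # retrieved documents
```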
And we can use that information to then decide on updates, on different things that we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to sort of do this in an automated way, we need four primary pieces of information. You're probably familiar with question, answer, input, output, and you may even be familiar with question, answer, context triples. What we need for eval is we need to also add a fourth component, the ground truth, sort of the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth data set. So again, what we can do, since we're not focused on absolutes here, is we can actually create a ground truth data set synthetically. And this is what we'll do today. We'll find the best model that we can, pull GPT-4 off the shelf, and we'll generate this set of information that will allow us to do evaluation. Okay, so we'll see how this works. It's pretty cool. And Ragus has a new library for this. But in terms of actual evaluation, when we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question, how relevant is the context to the question? All right, context recall, on the other hand, asks the question, is the retriever able to retrieve all of the relevant context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks the question, how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks, is the answer fact checkable from the context or is this a hallucination? So the four primary metrics in the RAGUS framework are these four, two for retrieval, two for generation. Let's dig in a little bit deeper to each one so that we really try to start grokking each metric individually because they're slightly different but nuanced. Faithfulness is trying to measure this factual consistency. Let's look at an example. The question, where and when was Einstein born? Context. If this is the context, Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says, well, he was born in Germany and he was born on 14 March 1879. Where a low faithfulness answer might get part of it right, but might hallucinate, right? We want to avoid these hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want this to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, how factual it is, what we're doing here is we're penalizing when the answer lacks completeness or on the other side, when it contains redundant details. So, for instance, where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. Oh, France is in Western Europe. 
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Very relevant, right? Okay, so context precision, as we get into the retrieval metrics, we're thinking about, in this case, a way that we can evaluate whether all of the ground truth relevant items are present in the context and how well ranked they are in order. So what we're looking for is we want all the most relevant chunks that we return from our vector database to appear in the top reference ranks. Okay. We want lots of good stuff ranked at the top. That's what we want. And so we're really looking for everything that's relevant to the question to then be returned in our context and to be order ranked by relevancy. Makes sense, you know, just the way we would want to do it if we were writing a book report or something. Finally, context recall is again kind of doing this same thing that we talked about before. We want to make sure we're paying attention to everything that's relevant. We want to make sure that we're addressing everything that's asked. So if the question here, where is France and what is its capital? Once again, if we have a ground truth answer already, the key here is we're actually leveraging ground truth as part of calculating this metric. France is in Western Europe and its capital is in Paris. A high context recall is addressing both of these. And within each sentence of the output addressing both of these. You can look sort of ground truth sentences that can be attributed to context over number of sentences in ground truth. And a low context recall is going to kind of be doing the same thing that we saw earlier. Well, France is in Western Europe, simple villages, Mediterranean beaches, country is renowned, sophisticated cuisine, on and on and on, but it doesn't address anything about Paris, which of course the ground truth does. And we can start to get a picture of, if we look at each of these metrics, we get some idea of how our system is performing overall. But that's generally kind of difficult to get a perfect picture of that. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision is sort of conveying this sort of high-level quality idea, right? Not too much redundant info, but not too much left out. Context recall is measuring our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy is sort of, am I to the point here? Am I very, very relevant to the question that was asked? Or am I kind of going off on a tangent here? And finally, RAGUS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea. And that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to use for others, but beware, you know what's really going on and directional improvements is really what we want to be focusing on. But we want to basically look at how the answer is related to the ground truth. Of course, if we have like a true ground truth data set, this is probably a very, very useful metric. If we have one that's generated by AI, we might want to be a little bit particular, a little bit more careful in looking at this metric and relying on it too much. 
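Written out as formulas, the two ratios described above look like this (as stated in the talk; see the RAGAS documentation for the precise definitions):

```latex
\mathrm{Faithfulness} =
  \frac{\text{claims in the answer that can be inferred from the retrieved context}}
       {\text{total claims in the generated answer}}
\qquad
\mathrm{Context\ Recall} =
  \frac{\text{ground-truth sentences attributable to the retrieved context}}
       {\text{total sentences in the ground truth}}
```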
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're kind of looking at two different things. We're looking at that factual similarity, but we're also looking at semantic similarity. So, you know, again, you can use this Einstein example. If the ground truth was Einstein was born in 1879 in Germany, the high answer correctness answer is exactly that. And then of course, low answer correctness is you're getting something literally wrong. So there is overlap between all of these things and it's important to sort of track that. But overall, the steps for doing RAGIS are to generate the question answer context ground truth data. And there's a awesome new way to do this called synthetic test data generation that has recently been released by RAGUS. We'll show you how to get it done today. Run that eval and then go ahead and try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from Langchain today. It's called the multi-query retriever. This is going to sort of generate many queries from our single query and then answer all of those and then return the relevant context from each of those questions into the prompt. So we're actually getting more information. But you can pick any retrievers off the shelf and you can then go back, you can look, did my metrics go up? Did they go down? What's happening as I add more data or more different retrieval advanced methods to my system? And in this way, we can see how we can combine RAGIS with RAG improvement as Wiz will go ahead and show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGIS, this is the thing we're here to talk about, right? It's a amazing library that does a lot of cool, powerful things. But the thing that is, you know, most important is that it allows us to have some insight into changes we make in terms of the directional impact they have, right? So while we might not be able to say, you know, these answers are definitely true, as Greg was expressing, we can say, it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do, you know, a evaluation on all of the metrics, we'd have two important things. One, we need to have questions. So these are questions that are potentially relevant to our data. In fact, they should be relevant to our data if we're trying to assess our retrieval pipeline, as well as our generations. And also some ground truths, right? As Greg was mentioning, you know, we are going to use synthetically created ground truths. So it might be more performant to use, let's say, you know, human labeled ground truths. But for now, we can let the LLM handle this. I'll just zoom in just a little bit here. And the idea is that we're going to leverage Ragus's new synthetic test data generation, which is very easy to use, much better than what the process we had to do before, which is kind of do this process manually. We're going to go ahead and use this to create our test data set. Now, it's important to keep in mind that this does use GPT-3, 5 Turbo 16 K as the base model, and it also includes GPT-4 as the critic. So we want to make sure we're not evaluating or creating too much data, or if we are, that we're staying very cognizant of the costs. 
So the first thing we're going to do is just create a separate data set or separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're just asking the same LLM, the same questions with the same context, which might, you know, unfairly benefit the more simple method. So we're just going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same, 29, 24. And then we're going to use the test set generator. It really is as easy as test set generator with open AI. That's what we're using for our LLM. And then we're going to generate with langchain docs. You'll notice this is specifically integrated with langchain. There's also a version for Lama index. And all we need to do is pass in our documents, the size that we like of our test set, and then the distributions. Now this distributions is quite interesting. Basically, this is going to create us questions at these ratios from these subcategories. So the idea is that this is going to be able to test our system on a variety of potentially different, you know, tests, right? So we have simple, which is, you know, as you might think, very simple. And we have, you know, this reasoning, which is going to require some more complex reasoning that might, you know, tax our LLM a little harder. And then we have this multi-context, which is going to require multiple contexts. So our LLM is going to have to pick up a bunch of them in order to be very good at this particular kind of task. And the reason this is important is that not only do we get kind of an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGUS team for putting this in. You know, we love this and it makes the job very much a lot easier. So that's great. We look at an example of the test data. We have our question, we have some contexts, and then we have our ground truth response, as well as our evaluation type, which is in this case, simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between Langchain and RAGIS. It's currently being worked on to be brought up to speed. But for now, we're just going to kind of do this manually. So what we're going to do is we're going to take our test set. We're going to look and see. We've got our questions, context, ground truths, as well as our evolution type. This is our distribution that we talked about earlier. And then we're going to grab a list of questions and ground truths. We're going to ask those questions to our RAG pipeline. And we're going to collect the answers and we're going to collect the contexts. And then we're going to create a Hugging Face data set from those collected responses along with those test questions and our test ground truths. We can see that each of the rows in our data set has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this data set, we're good to go and we can go ahead and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from Ragas, which is very good. These are the ones we're caring about today. Faithfulness, answer relevancy, context precision, context recall, and answer correctness. 
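A sketch of the synthetic test-set generation and response collection just described, using the RAGAS API roughly as it existed at the time of this talk (early 2024; later versions renamed parts of it). `eval_documents` stands in for the separate 1000/200 chunk pile, `rag_chain` is the chain from the earlier sketch, and the test size and distribution ratios are illustrative.

```python
# Generate a synthetic test set with RAGAS, then run each test question through
# the RAG chain and collect answers + contexts into a Hugging Face dataset.
# API names follow ragas 0.1.x; column names may differ in other versions.
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from datasets import Dataset

generator = TestsetGenerator.with_openai()      # gpt-3.5-turbo-16k generator, gpt-4 critic
testset = generator.generate_with_langchain_docs(
    eval_documents,                             # separate chunk pile (size 1000, overlap 200)
    test_size=20,                               # illustrative size
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
test_df = testset.to_pandas()                   # question, contexts, ground_truth, evolution_type

answers, contexts = [], []
for question in test_df["question"]:
    result = rag_chain.invoke({"question": question})
    answers.append(result["response"].content)
    contexts.append([doc.page_content for doc in result["context"]])

response_dataset = Dataset.from_dict({
    "question": test_df["question"].tolist(),
    "answer": answers,
    "contexts": contexts,
    "ground_truth": test_df["ground_truth"].tolist(),
})
```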
And you can see it's as simple as loading, importing them, and then putting them into a list so that when we call the evaluate, you know, we're going to pass in our response data set, which is this data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that Ragas made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times that we used to see. Thanks, Ragas team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. It's like we want these numbers to be high, but we're more interested in seeing if changes we make to our system make those numbers higher. So let's look at another awesome part of RAGUS before we move on to making a change and seeing how it goes, which is we have the ability to look at these scores at a per-question level in the Pandas data frame. So you can see that we have all of our scores and they're given to us in this data frame this is huge especially because we can map these questions back to those evolution types and we can see how our model performs on different subsets of those uh those distribute the elements of that distribution so now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the Langchain documentation. We're going to use this as an advanced retriever. So this should retrieve more relevant context for us. That's the hope anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before. We're just wrapping it in this multi-query retriever. Now, before we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which is going to be our prompt. We're using the same prompt that we used before. So we're not changing the prompt at all. And then we're going to create retrieval chain, which is going to do exactly what we did before in LCL, but it's, you know, we don't have to write all that LCL. So if you're looking for an easier abstracted method, here you go uh you'll notice we call it in basically the same way and then we are also looking at uh this answer the answer is basically uh you know the response.content from before and then uh you know we can see this is a good answer makes sense to me uh but we also have a better answer for this what is Landgraf question. So this heartens me, right? I'm feeling better. Like maybe this will be a better system. And before you might have to just look at it and be like, yeah, it feels better. But now with RAGUS, we can go ahead and just evaluate. 
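And a sketch of the evaluation call plus the multi-query upgrade described above. `response_dataset`, `retriever`, and `llm` are assumed from the earlier sketches; note that the `create_retrieval_chain` helper expects `input`/`context` keys, so a small prompt is redefined here with those variable names.

```python
# Run the RAGAS evaluation, then swap in a MultiQueryRetriever and rebuild the
# chain with LangChain's higher-level helpers instead of hand-written LCEL.
from ragas import evaluate
from ragas.metrics import (
    faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness,
)
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate

metrics = [faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness]
baseline_results = evaluate(response_dataset, metrics)
print(baseline_results)                         # aggregate scores
per_question_df = baseline_results.to_pandas()  # per-question scores

# Advanced retrieval: the same base retriever and LLM wrapped in a multi-query retriever.
mq_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

mq_prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    'If you cannot answer from the context, say "I don\'t know".\n\n'
    "Context:\n{context}\n\nQuestion:\n{input}"
)
doc_chain = create_stuff_documents_chain(llm, mq_prompt)
mq_rag_chain = create_retrieval_chain(mq_retriever, doc_chain)

out = mq_rag_chain.invoke({"input": "What is LangGraph?"})
print(out["answer"])
```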
We're going to do the same process we did before by cycling through each of the questions in our test set and then getting responses and context for them and then we're going to evaluate across the same metrics you'll notice that our metrics uh have definitely changed so let's look at a little bit more closely how they've changed so it looks like we've gotten better at our faithfulness metric we've gotten significantly better at answer relevancy which is nice we've gotten a little bit better at context recall. We've taken some small hits, a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve kind of what we hoped it would improve. And now we are left to tinker to figure out how would we improve this or answer correctness doesn't get impacted by this change, but at least we know in what ways, how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGIS. And each of these metrics correspond to specific parts of our RAGIS application. And so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our rag systems not just sort of by thinking about i think that's better uh land graph question got answered better but actually we can go and we can show our bosses our investors anybody that might be out there listening hey look we have a more faithful system check it out went from base model to multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can go and take just things off the shelf that we're trying to qualitatively analyze before and directionally improve our systems by instrumenting them with RAGIS and measuring before and after small iterations to our application. So today we saw Langchain v0.1.0 to build RAG, and then we actually did RAG on the Langchain v0.1.0 blog. Expect stable releases from here. It's more production ready than ever. And you can not just measure faithfulness, you can measure different generation metrics, different retrieval metrics even different end-to-end metrics and big shout out to everybody today that supported our event shout out to langchain shout out to ragas and shout out to everybody joining us live on youtube with that it's time for q a and i'd like to welcome Wiz back to the stage as well as Jithin and Shaul from Vragus, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jitin and Shaul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich. Yeah. You want to go for it? I know I didn't actually understand what you actually mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shaul? Yeah, I mean, what is the difference between memorization and hallucination rack systems? That's it. 
The line between memorization and hallucination, I don't know where to draw that particular line. It's something seems like, seems like what it meant is the usage of internal knowledge versus you know there are situations in drag when knowledge is a continually evolving thing right so maybe the llm thing that a person is you know is still alive but the person died yesterday or something now the now if if that particular thing is uh is read using wikipedia or something there will be a contrasting knowledge between the LLM and what the ground truth Wikipedia sees. Now, that can be hard to overcome because the LLM still believes something else. So it's a hard to crack problem and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, what we require is when using LLMs to build RAC, we can align LLMs so that LLMs answer only from the given grounded text data and not from the internal knowledge. So, or there must be high preference to the grounded text data compared to what is there in the LLMs internal knowledge. So that can be one of the situations. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided uh basically really i mean yeah yeah we when it comes to the memorization versus hallucination i think the the most important thing is uh you know memorization is that you could maybe frame it as a slightly less negative form of hallucination because it's likely to be closer to whatever the training data was. But in terms of RAG application, both bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context aware chunking. Can we expect significant improvements of rag results using smart chunking? What do you think, Jitin? Is this something that we can expect improvements in? Yeah, so how you, so one thing that we see when we're building rag systems is that how you're formatting the data is where most of the problems are. Like if you take some time to clean up the data and to format the data is like where most of the problems are like if you if you take some time to clean up the data and like to format data that actually makes it easier for your act the performance difference like like really great because like models right now if you're using a very stable model if you provide with the correct context the model will be able to use the information in the context to get it so all these tips and tricks to optimize about even like um chris was using the multi uh context method right it's also another trick to get make sure that you get different context from different perspectives into the final answer so all these different types of tricks can be used and this is actually why we started this also we wanted to like evaluate all the different different tricks that are there out there and try to see which works best because it can be different on your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot more because what goes in comes out. So that's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shaul. 
Can you compare TrueLens and RAGAS? This is the first I've heard of TrueLens. Maybe if other people have, and maybe you can tell us a little bit about what they're doing and what you're doing and the overlap you see. Sure. Yeah, TrueLens has been around for a while for evaluating ML applications, and they are also doing a lot of applications. So RAGAS currently is mostly focused on racks as in we wanted to crack the application that most people care about that is racks. And so we are mostly, you know, doing things that can help people to evaluate and improve their racks. We are not building any UI. We are largely providing for the integrations part. We are largely interested in providing integrations to players like Langsmith so that people can trace and see their UI rather than building a UI on top of Raga. So Raga mainly offers metrics and features like as you have seen, synthetic test data generation to help you evaluate your racks. I don't think TrueLens has a synthetic data generation feature, which is something that most of our developers really liked because it has saved a ton of their time because nobody really wants to go and label hundreds of documents of documents it's a boring job right so we are trying to double down on these points that we have seen that developers really like and we are trying to stay true to the open source community as well nice okay very cool very cool rad asks I'll send this one over to you, Wiz. Can you combine multiple query retriever with conversational retrieval chain? Sure. Yeah. Basically, Langchain works in a way where you can combine any retriever inside of any chain, right? So a retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process or combine many different retrievers in an ensemble, you can do that with basically any chain. Basically, that conversational retrieval chain is looking for a retriever. And so as long as it can be accessed through the retrieval API, it's going to work fine. retriever. And so as long as it can be accessed through the retrieval API, it's gonna work fine. I would I would add though, conversational retrieval chain, you'll want to use the 0.1.0 version, which is, you know, been implemented with LCL. But other than that, you're good to go. Okay, okay. And sort of back to this idea of sort of smart, chunking, smart hierarchy of data. Is there sort of like, we often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? What's the chunk size? So Sujit asks, and maybe I'll send this one over to you, Jithin, I know the chunk size matters. Are there like guidelines for chunking that you guys are aware of or that you recommend when people are building rag systems? Yeah, so I don't have like a very good guideline. Maybe Shahul can take back it up. But one thing that I've like seen like personally from experience is like, so A, do the evaluations, but then B, like also making sure that you get, you combine like multiple, like, so you basically, you create a hierarchy system where you have like different chunks. Then you summarize the different like concepts, like define the, uh, summarize the different channels so that, uh, even like all the beer, like core ideas are there in the hierarchy that actually has been like very like helpful. So, yeah. 
like core ideas are there in the hierarchy that actually has been like very like helpful so yeah so exactly like chunking size i haven't seen it in the uh like matrices as such um but all the like all the recursive like summarization that has helped and i think uh lament x has like uh a few retrievers right there what shall what do you think? VARSHAAL KUMAR- Yeah, just adding some more points into it. I think there is no one size fits chunk size that fits all type of documents and all type of text data. So it's a relative thing that should either you get. So there are two ways to handle this problem. Either you can, the general rule of thumb is to ensure that enough context the context makes sense even without any you know as as an individual you know as an individual chunk it it should make con if it should make some sense if you read it if a person writes it so how to how to achieve this you can achieve this either using writing a set of heuristics or let's say you know it can be something like okay determine the document you know type or something and change it using that and i think the from moving from heuristics to where we are going i think we might even see smaller models smaller very smaller models that are capable of chunking determining the chunk boundaries smartly so that you don't really have to rely on the heuristics it's more a generalizable way of doing it so I think that's where we are going in in the future um of chunking and uh hopefully the problem gets solved like that yeah yeah yeah I really like this idea of making sure each individual chunk makes sense before sort of moving up a level and thinking about, okay, what's the exact, you know, hierarchical parent document, multi-equal, like whatever it is that you're doing, each chunk should make sense. And that's going to be dependent on data. Yeah. I really liked that. And okay. So let's, let's go ahead and sort of related to that, I wanna go to this embedding model question in the Slido from Ron. It's similar in sort of relation to this chunking idea. I mean, people always want the answer, you know? So what chunk size? Here, Ron asks, which embedding models should I be using when I develop a system? Any emergent models or techniques that I can see significant improvements with? Maybe Shaul, if you want to continue here. Sure. Again, there is no one fit size for this answer. You know, the thing is that, again, it depends on a lot of factors. So if you don't want to really you know use again first first you know question will be open source or closed source you have like a lot of open source players even revealing open a with their open source models like i think recently uh by uh alibaba group uh released their m3 embedding which is like awesome it's like most powerful open source embeddings which we we have ever seen uh even revealing open is at our buildings right so it's it's a set of questions that you have to answer if you want to go for easy way of building a baseline rag of course open is embeddings you know good place to start you don't have to worry about anything else then you you can iteratively improve it that's where also ragas comes in let's say you have now you have an abundance of embeddings to choose from right so now you have you want a way to compare it so you don't use ragas you know you can just compare all these different embeddings choose the one that fits you and you're done there it it is. There it is. Just closing up this topic on chunks and embedding models. 
Wiz, I wonder, why did you choose Ada? Why did you choose, what is it? 750 overlap. Any particular reason? Zero thought put into those decisions. We used Ada because it's the best open AI model that's currently implemented. And we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or a more mindful approach. And so to do that, we just kind of selected them. I think the thing I really want to echo that we've heard so far is when we're thinking about our index or we're thinking about our vector store, we really want to be able to represent individual like quanta of information. And so the closer we can get to that, the better it will be. And then we can add that hierarchy on top. And I think what was said about, you you know using models to determine that at some point is definitely a future we can uh we can imagine we'll be living in soon yeah yeah and i think again we go back to this data centric idea it's easy to get the rag system set up and to get instrumented with aragus but like you're gonna get the improvements you're gonna get the thing really doing what you need it to do for your users by doing the hard kind of boring data work data engineering data science on the front end that really you just can't outsource to ai and you just have to kind of deal with yourself okay one more sort of like what's the answer question. I want to maybe send this one to Jithin. If somebody is picking up ragas and they build a rag system and they're like, okay, well, which ragas metric should I use? You know, which one should I look at? Right. What would you say? Is there, is there a starting point? Is there a sequence that you'd look at? Or the jury's not out yet on this. VATSAL SHARANAMANIYARANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANANAN And then just like first of all, the first one, just try out like with all of the stuff, like basically like once you know which component, like what figuring out which component work, how, like what the state of which all these components are gives you an idea of, okay, where can I make an improvement with like as fast as possible? If, if your generator is bad, maybe try out a few other other LLMs or maybe if your retriever is bad, then figure out okay, in the retriever part, what is actually happening is context relevancy? Is it is it the recall that's bad? And like that is the way so starting off try out, try out all the metrics that you have. And then for the ones that are the bad the worst. 
And like after you understand like what the metrics are, you will get an idea of how you could like what other stuff you can actually try out to improve it and if it's like try out the easiest part like cross out the low-hanging fruits first and that is how you would like over time like progressively like uh improve it like but like i said it's not the absolute values that matter it's like the trends that matter right so you guys did a good job in explaining that so make sure like you go for the easiest things that you can patch up fast and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's like, yeah, it's so simple when we can break it down like this. And you know, just a shout out to everybody out in Manny, just shouting out to Manny. That was kind of an attempt to answer one of your many questions today. We'll see if we can get some more on LinkedIn, but I think this idea of like getting your system instrumented so you can start to look at and chunk up different pieces of it and try to improve them. There's a lot of content that needs to be made on this. These guys are open source first, open source forward. We'd love to see some folks in the community start to put some guides together for how to actually break down and use RAGUS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGUS in 2024? Maybe if either of you wanna go ahead and take take this go ahead and take it let us know what to expect from you guys heading forward this year yeah shall we we want to take this yeah yeah that's a tricky question so you want to go where the community takes us so yeah doubling down on um things like synthetic data generation there are there are a lot of interests there there are a lot of interest in expanding ragas to other llm tasks as well so yeah there are all these interesting directions to take hopefully uh you know we'll get more signals from the community on which path so to take i mean we do have a lot of directions a lot of feature requests coming in so we have to just you know take that decision and move on but uh but yeah as of now um the the synthetic test generation is something that gets a lot of interest we want to you know make it very stable very useful make sure that that we push the limits of you know uh the the closed source models and plus frameworks analogy uh to build a great uh you know test data point that's that's very easy and uh easy to use yeah yeah anything to add yet then yeah like honestly like so right now we have a good base right now we're like very curious what like what we can do like evaluation driven development what are the extremes of that so like curious to see like what like uh what the community comes up with what like like you guys can like we come up with so yeah excited really excited for that yeah yeah let's see what everybody builds ships and shares out there and uh and contributes well thanks so much jiten thanks shaul thanks Wiz. We'll go ahead and close it out for today. And thanks everybody for joining us. Next week, you can continue learning with us. We're talking alignment with reinforcement learning with AI feedback. If you haven't yet, please like and subscribe on YouTube. 
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI engineering bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well. We will get back to your question and we will share all the questions today with the RAGUS guys to see if we can get follow-ups for everybody that joined us and asked great questions today. So until next time and as always keep building, shipping and sharing and we and the RAGUS guys will definitely keep doing the same. Thanks everybody. See you next time.", "title": "RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1", "duration": 3842, "uploader": "AI Makerspace", "upload_date": "20240207", "description": "GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively!\n\nSpecial thanks to LangChain and RAGAS for partnering with us on this event!\n\nEvent page: https://lu.ma/theartofrag\n\nHave a question for a speaker? Drop them here: \nhttps://app.sli.do/event/2rLa8RML994YsMQt1KLrJi\n\nSpeakers: \nDr. Greg, Co-Founder & CEO\nhttps://www.linkedin.com/in/greglough...\n\nThe Wiz, Co-Founder & CTO\nhttps://www.linkedin.com/in/csalexiuk/\n\nJoin our community to start building, shipping, and sharing with us today!\n https://discord.gg/RzhvYvAwzA\n\nApply for our new AI Engineering Bootcamp on Maven today! \n https://bit.ly/aie1\n\nHow'd we do? Share your feedback and suggestions for future events.\nhttps://forms.gle/ryzhbvxZtbvQ4BCv5", "datetime": "2024-06-09T23:18:31.510051"}
{"url": "https://www.youtube.com/watch?v=7N72dvQ7lDg", "transcription": " Yo, Wiz, true or false? Transformers are basically just fancy classifiers. Yes. That's right. Okay. All right. How about technically the loss function used to train transformers and the loss function used to train classifiers are the same. They share very, very, very similar roots. Absolutely. Oh man. Okay. I'm excited for this one, man. Are you pumped to kind of get into the next token today and finish out our story of the transformer? I know I am. I'm at a loss for words, Greg. Oh yeah. All right. Let's get into it. Can't wait to have you back in a bit, we'll be discussing throughout the day. We will be digging in to the deets and we are here to discuss really one of the most important things that's at the root of all of the applications being built today in AI. My name is Greg. That was the Wiz. We are AI Makerspace. Thanks for taking the time to join us. Today, if you've got questions along the way, please drop them in the Slido link. Please drop them in the Slido link. Please drop them in the YouTube live chat. This should be a fun discussion, and we're excited to sort of kick off a new series of deeper dives into the Transformer and training inference and beyond. Everybody, welcome to the next token, how LLMs predict. So today is going to be kind of a fun one as we align ourselves to the sesh. What we really want to do is we want to kind of bring back the different pieces that we've talked about already, if you've been following, and we've got available if you want to dive deeper. We are almost at the end of the transformer stack. And once we can put in something to the transformer, get something out, we can do inference. And what's really cool is that once we can do inference and we can see what goes in and what comes out, then we can really talk about, well, how do we sort of compare what goes into what comes out, in and out, and optimize that, essentially do training. So we'll talk about what goes in, what comes out, exactly how we're predicting the next token after we're done with our decoder blocks. And then what implications does this have on training? And how can we connect this back to the classic roots of ML and AI that we all learned early in the game? So first we talk Transformer, then Token, then Loss. This should be fun, everybody. Let's get into it. So Transformer, of course, we see this kind of thing a lot. It's terrible. I personally can't stand it. I hope you can't stand it too, if you're watching our channel. And we've sort of discussed, actually, this is not the one that is mostly out there. The GPT style is the decoder only stack. We've talked about this in a previous event that you guys can check out. But if we zoom in on this, what we can see is we can see that there's some stuff sort of going in here, some embeddings, some positional encodings. We've talked about this stuff in a big long event we did talking about the Lego blocks, but we kind of got stuck because it took so long to talk about it. As we go into the decoder block, we run into masked multi-headed attention and just generally self-attention. We went into quite a bit of detail on self-attention, the QKBs and all of that in a previous event as well. Today, we are excited to finally dig in to the top of the stack here. And the top of the stack is really where we're predicting that next token. Everybody just says, oh, you know, the LLMs aren't so smart. They're just next word predictors. 
Well, let's take a look and see if that's true today. As we go from in to out, we want to see how this comes together. So as we start our journey into the next token, I want to flip this thing up on its head. One of the things that the best visualizations out there suffer from is consistency about which way is up and which way is down. So let's flip this bad boy on its head so we can get a little more intuition about how it's actually operating, and get a better visualization of exactly how things move through the entire stack in a more realistic transformer. This diagram is, of course, from the Attention Is All You Need paper. There was a really great visualization put out recently that we're going to use today, and we're going to focus on taking a look at nanoGPT. It looks like gibberish at first glance, but we're going to focus in on how one of these things comes together. A little bit more complicated than the paper, a lot less complicated than what most of the stacks out there actually look like today. If we look at this nanoGPT stack, we can see that there are three decoder blocks, and these go through self-attention, add-and-norm layers, and a feed-forward network, aka a multi-layer perceptron, as we move through the stack. Now, conceptually, very cool stuff is happening: what are we paying attention to and why? We've talked about that in previous events. Today, we're getting a little more down in the weeds, and what we really want to do is focus on this top piece of the stack. So I want to take a closer look at nanoGPT with you. Now, if you're not familiar with nanoGPT, it's something that Andrej Karpathy built, a legend in the field, obviously. It's one that we used to start with in a former class where we really dug down into the details of it, and it's a great one to start with in general. I'm going to show you guys what this thing looks like. If we look at our visualization, I highly recommend that you check this thing out and play around with it if you're interested in this kind of stuff. If we zoom in on the top here, and this is a simple example, what we can see is our embedding representation. Now, we're trying to solve a very simple problem here: we're just using character-level tokens, and we've only got three different characters, A, B, and C, that we're dealing with. But we don't really need to pay attention to the specific use case as much as we want to look at the structure. So what we have are our tokens of length T. That's going to be pretty important as we continue on. And for each token that we have in our sequence, we want an embedding representation of that token. So if we look at our input embedding representation, what we see is that we have T columns and C rows; the number of columns equals the number of tokens. Then we take our token embedding representation, we take our positional encoding information, smash these bad boys together, and we have a number-of-tokens-in-the-sequence by embedding-length representation. 
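As a minimal sketch of that input step, the following toy PyTorch snippet (with made-up sizes and variable names, not nanoGPT's actual code) builds the tokens-by-embedding-length representation by adding a token embedding and a learned positional embedding:

```python
import torch
import torch.nn as nn

vocab_size, block_size, n_embd = 3, 8, 16            # toy sizes (made up): A/B/C vocab, T=8 tokens, C=16 dims

tok_emb_table = nn.Embedding(vocab_size, n_embd)      # token embedding table
pos_emb_table = nn.Embedding(block_size, n_embd)      # learned positional embeddings

idx = torch.tensor([[0, 1, 2, 1, 0, 2, 1, 0]])        # (B=1, T=8) token ids for a toy sequence
tok_emb = tok_emb_table(idx)                          # (B, T, C)
pos_emb = pos_emb_table(torch.arange(idx.size(1)))    # (T, C), broadcast over the batch
x = tok_emb + pos_emb                                 # (B, T, C): what flows through the decoder blocks
print(x.shape)                                        # torch.Size([1, 8, 16])
```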
Now this is pretty cool, because this is the representation that we need to stick with throughout. As we start to move into our transformer, we see that there's a lot going on inside it, but if we track our embedding representation, T by C, we have a similar T-by-C layer norm representation, a similar T-by-C projection coming out, and a similar T-by-C pass to the next transformer block. You can look at attention, you can look at the projection, but more generally it's interesting to look at this as a whole transformer block, the transformer zero block, and you can scroll down: we've got transformer zero, transformer one, transformer two, and so on. What's most interesting to us today is that when we get down to the very bottom of our output, what we see once more is our good old friend T by C, the tokens by embedding-length representation. So the question is, what's happening to this exactly, and how is it being transformed to give me the next token? That's the crux of the issue we want to dig into today. We see a number of pieces to our puzzle, but we're going to discuss how to move through them specifically, to get to a logits representation that's going to give us everything we need to decide what our next token should be. Interestingly, once we move through our decoder stack, we actually need to decode. That is, we need to decode our T by C to decide how to choose the next token. Because what's happened in T by C is that I'm in an embedding space, right? But what I need is to get back into a space of vocabulary, a space of language. Embedding space is a vector representation; it's not a natural language representation. So how do we go from this embedding space representation to tokens, exactly? How do we do this? Well, to get some insight into this, let's zoom in a little bit. Recap: a T-by-C representation goes in, and a T-by-C representation comes out. Now, as we look along the stack here, we see that we have language model head weights, and the language model head weights are of size n_vocab by C. We multiply them into this embedding representation to get a distribution, because when we multiply (n_vocab by C) times (C by T), that's our T-by-C output, just transposed, we get a vocab-by-token result. And for this, you'll recall we've got one, two, three vocabulary letters, A, B, and C, and then we've got our token sequence length here. So this is pretty simple from a matrix algebra perspective. In fact, Wiz would tell you it's pretty simple from a programming perspective, and you'll see that today. And maybe it is. What's really cool here is that we're moving from this embedding space back to this distribution, and that distribution incorporates our vocabulary information. What we can do is take this unnormalized distribution of which word has the highest probability to be the next word, and somehow decide through the next two steps. So when we think about this idea of going from logits to next token, we're talking about going from a distribution to the next token. 
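A minimal sketch of that head projection, reusing the made-up toy sizes from the previous snippet (this shows the shape arithmetic only, not nanoGPT's actual implementation):

```python
import torch

vocab_size, T, C = 3, 8, 16                  # same made-up toy sizes as above

x = torch.randn(T, C)                        # decoder-stack output: one C-dim vector per token position
W_lm_head = torch.randn(vocab_size, C)       # language model head weights: (n_vocab, C)

logits = W_lm_head @ x.T                     # (n_vocab, C) @ (C, T) -> (n_vocab, T): a score per vocab item per position
next_token_logits = logits[:, -1]            # when generating, only the last position matters
print(logits.shape, next_token_logits.shape) # torch.Size([3, 8]) torch.Size([3])
```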
And the way we want to think about doing this is by picking the most likely next token. Okay, but how do we pick the most likely next token? Well, the answer is the soft maximum. That's right, the soft maximum. We get the maximum, but we also get everything else too: the stuff that's close to the maximum. Now, this is still a naive programming technique, that is, one that prioritizes imperfect shortcuts for the sake of speed, simplicity, or potentially lack of knowledge. But it's not a greedy algorithm like the hard max would be. The argmax would just tell me the single highest-probability token, and if we used that hard version, it would pick the most likely token every time, in the most predictable possible way. You'd be pretty boring, wouldn't you? You'd sound, I don't know, a little bit like a robot, a little bit like a machine, and that's maybe not super ideal. So a softer max is a way to say, well, maybe sometimes we don't want to pick exactly the top most likely next token; maybe we want to add a little flexibility, a little softness, to this. And there are additional ways that we do this, of course, in state-of-the-art models. Now, what's interesting to note here is that we could take this set of logits and go directly to a token by building a classifier head. We could do that. Only we choose not to do that here. Weird. And it has a little bit to do with softness. But I want to bring Wiz back up to talk a little bit more about how he views the world here. And I think the right way to get into this is: what do you think the right way is to select a next token, in general? Because that's kind of an important question, isn't it? Yeah, I mean, if I had the exact answer, I bet I'd be on a cruise ship or a yacht somewhere. But there's not a real correct answer here, right? The idea of there being a right way to select the token is opposite to our intuition; our intuition would say that's very situation dependent, it's kind of dependent on what came before, yada yada. The idea is that there's not a perfect way to select the next token. And in fact, you might claim, well, what if I always want to choose what the correct token would be as per my data? Well, that's still not necessarily the right token, because that's just going to reproduce our dataset, which is not traditionally a very useful thing. If we wanted the dataset, it would be much easier to just print it out, right, than to build such a complicated algorithm to print it out. That's right. So this idea of the right way is kind of like, well, in an inference space, it's not necessarily clear. And this is a lot of where prompting comes into play, a lot of flavoring it: hey, you want to think about talking from this role, or you have this feeling, or you want to make the customer feel like this or that. And this is what's so interesting when you dig down deep in here: we have enough flexibility to be able to do this. And then, of course, the number one parameter that people talk about all the time, because I think it's the one you get access to with OpenAI, maybe that's why, is temperature. And I guess technically we could call this a smoothing of the distribution that's happening down there. 
This temperature idea, where a higher temperature makes the distribution a little more uniform and a lower temperature makes it a little more concentrated on the higher-probability points: how should we think about using temperature exactly when we're building these applications? Yeah, it's exactly what you described, right? The idea is that temperature scales our logits. So if we have logits and we scale them by a very, very big number, which is to say we divide them by a very big number, then they're going to get closer together, right? And so when we softmax that, the distribution is going to be a lot smoother or flatter, more close to uniform. And because we're picking tokens from that distribution, we're more likely to pick tokens that are, quote unquote, further down the line than the token that was scored the highest. And the opposite is true, right? If we divide our logits by a number less than one, they're going to become bigger; it's the same thing as multiplying them by that factor. So the idea there is that we get a lot of space between each of our logits, where the biggest one grows proportionally more than the smallest one grew, and we get this very sharp probability distribution where our most likely token now dominates the distribution and is much more likely to be selected. What that means in practice is that we're more likely to get the expected next token when we have a very low temperature, so a temperature close to zero. Obviously, we shouldn't have a temperature of exactly zero, since we can't divide by zero, but many libraries deal with that with an epsilon or whatever. But the idea is, if we want the model to be very likely to give us what is expected, we choose a very low temperature. If we want it to sometimes make a left-field choice in terms of the token it selects, we should use a high temperature. This is often expressed in a simplified form as: the model will be more creative if you have a higher temperature, and it will kind of follow the flow chart if you've got a very low temperature. And we'll look at some examples in the notebook of exactly how that looks. Yeah, okay. So then, if I'm thinking about the distribution here, if I just wanted to zoom way in on this, I've got A, B, and C here. What's the way you think about it? I'm imagining some sort of normal distribution across each of these vocabulary choices, and I'm randomly picking one, but the distribution takes care of the fact that the more probable it is, the more likely I am to pick it. Is that the right way to think about this? Sure. I'd probably move away from normal, but yes, there's a distribution that has a peak and that has a low end. That's the beauty of softmax: whatever we have can be fit into that zero-to-one box, so it can be expressed as a probability distribution. It doesn't have to be normal. With a very high temperature, we're going to move more and more towards an actually uniform distribution, where every token is as likely as the next token. 
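A quick illustration of that scaling, with made-up logits for a three-token vocabulary (a sketch only; the exact numbers just show how dividing by the temperature reshapes the softmax output):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 1.0, -1.0])           # made-up scores for a 3-token vocabulary (A, B, C)

for temperature in (0.1, 1.0, 10.0):               # low, neutral, high
    probs = F.softmax(logits / temperature, dim=-1)
    print(temperature, [round(p, 3) for p in probs.tolist()])

# 0.1  -> ~[1.000, 0.000, 0.000]   (sharp: the top token dominates)
# 1.0  -> ~[0.705, 0.259, 0.035]   (the plain softmax of the logits)
# 10.0 -> ~[0.378, 0.342, 0.280]   (flat: close to uniform)
```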
Obviously, your temperature has to be exceedingly high to get to that point in a lot of cases. But that's the idea: at some point, when we sample from that distribution, we're going to get a token at random. Any of them will do, and it's like rolling the dice which token we're going to select. Versus with a very low temperature, there's a chance we'll get one of those tokens sitting at 0.000-something, but we're probably going to get the one that's at 99.9-repeating percent, right? So this is the idea when we're thinking about that softmax that comes after we scale our logits with that temperature value. And this is something else we'll discuss a little bit in the notebook: we also have another knob we can manipulate, which is that top k, right? So when we're talking about our tokens, not only can we think about temperature, but we can also say, well, even though we want some variability, we don't really want it to pick the 18th most likely token or the 15th most likely token; or we only want it to consider, say, the top five percent. Because we have this intuition that if you have a vocabulary of 128,000 tokens and you consider the 128,000th choice, it's probably wrong. The model's pretty confident about that. So this idea that we can zoom into a specific set of tokens is incredibly useful. Yeah, like if I'm speaking and I'm not really sure if the next word makes sense, maybe I'll stay away from it. Yeah, that's right. But we want some variability within that top 5% of tokens that are probably right. We want some variability within those, sure. Yeah, and sometimes you're feeling bold and you're like, I'm not quite sure this fits here, but I'm feeling it right now. That's right. Yeah, I want to go with it. Okay, so one more thing I just want to point out here, just to make sure I'm thinking about this right. This is a token sequence here, this dimension. And it's important to understand that we've got this sequence of tokens in the representation, but then we're predicting only one token. But there is a bunch in the sequence. So the representation actually is of a larger number of tokens than just the one you're predicting, right? And how do you think about that? Is that the context window here that we're predicting over? What's the easiest way to think about this from a practical perspective? Honestly, we just care about the last one, so I wouldn't think about it too hard. But the idea is we do compute everything, and we only really care about the last one. So even though we are going to compute a bunch of things, it's kind of like, well, we don't need to predict those, right? Because we already have them, so there's not a ton of use in considering them. So we just consider the last position as the index of the token that we care about. Yeah. 
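A small sketch of that top-k idea, again with made-up logits (the same masking trick, setting everything outside the top k to minus infinity before the softmax, shows up later in the nanoGPT walkthrough):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([6.0, 2.0, 7.0, 0.1, -8.0, 9.0])      # made-up scores over a toy 6-token vocabulary
top_k = 3

v, _ = torch.topk(logits, top_k)                             # values of the 3 highest logits
logits = logits.masked_fill(logits < v[-1], float("-inf"))   # anything below the k-th best can never be sampled

probs = F.softmax(logits, dim=-1)                            # only the top-k candidates get nonzero probability
next_id = torch.multinomial(probs, num_samples=1)            # sample one token id from what's left
print([round(p, 3) for p in probs.tolist()], next_id.item())
```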
And so then, as the vocab increases, it goes from three to ten thousand or a hundred thousand, and as the number of tokens in the sequence increases, say from 8K to 128K, we start to get into situations where, when we do inference, we'll really want to take care of the things that don't matter in smart ways, right? Yeah, well, that's a whole other discussion. Okay, good stuff. So by zooming in we can also zoom out, and this is the benefit of understanding some of this stuff. You just said logits; I'm going to dig in, and we'll come right back to it, Wiz. Let's take another look at how we can think about this from an even simpler perspective now. If we zoom in even further, this is from The Illustrated Transformer, and we're going to flip this thing back up the way it was, because Jay Alammar put this out in a similar fashion years ago. So if we have this decoder stack output here, I'm not in a token-sequence-by-embedding-representation form anymore; I'm just looking at a single token, a single word in this case, by the embedding representation C. And as I watch this thing move into and through my linear layer, I'm going to get the number of words in the vocab, for which I can assess the probability of the next word. And of course, the way I did this before was by multiplying in the language model head weights, n_vocab by C, but instead of T by C, what we have now is just this 1 by C. So what happens is we end up in a space where we have a 1 by C times a C by N, giving us a 1 by N. And that is going to allow us to take our unnormalized logits and then normalize them, softmax them, into these log probabilities that are going to serve as the basis on which I can do simple things like argmax, or I can do more complicated, more flexible, more interesting things, especially if I have a more interesting problem I'm trying to solve. In this case, maybe I'm just going to select the word straight up. Okay, so this is the one-dimension, one-word example that you can think about and look at. And I want to bring Chris back up for a little more discussion here before we actually see how to predict this. So we've got this logits and this log probs and this softmax, and I just want to break this down for everybody. How do you think about this? What's the difference between logits, log probs, and softmax? Yeah. So logits are, and this is the worst way to say it but it's technically correct, the raw, unnormalized scores, right? This is the idea that when we project our output from our embedded dimension through our linear layer, we're moving it into that vocabulary space. We know the attention that's computed is an attention score; that's what we care about. We care about this idea of a score: a value that indicates how much we should care about this thing. We quantify that with this idea of an attention score. When we project that into our vocabulary space, what we wind up with is a huge set of scores for every token in our vocabulary, and those scores, before we do anything to them, are called logits. So this is the idea that they're raw and they're unnormalized. They're just values, right? And they can be whatever you want. 
They can be small, they can be negative, they can be positive, they can be large; it doesn't matter. But they're scores. When we move through softmax, when we take all of those logits and apply that softmax transformation, or the softmax layer in this case, which is just going to apply softmax to those scores, that's how we move to the probs, the probabilities, right? This is the idea that probability distributions, famously, have a few rules they need to follow: every value needs to be between zero and one, and the whole distribution has to sum to one. And so this is the idea that we have to transform this kind of mess, which is still valuable, still incredibly useful to think about, because as we just talked about with temperature, we can do cool stuff with our logits to change what the resultant probability distribution is, but we eventually have to go to a distribution in order to interact with it in the way that we're used to, so sampling from it, et cetera. And this is that conversion. So the log probs are ultimately what we're going to use to pick a token, right? And we're just going to sample from that distribution. So that means the log probs are, in effect, the softmaxed logits. That's right. That's right. Yes. Yeah, absolutely. Killer, killer. Okay. So you're going to show us how this looks in code? Is that right? We're going to move through it, yeah. We're going to show everybody. Okay. All right. So next up, guys, we've got actually predicting a token, and I'm going to hand this one right off to you, Wiz. Take it from here, man. All righty. Thank you, Greg. So we've got our Colab notebook open; it'll be shared in the chat for you guys. The idea of this is pretty straightforward. We're going to approach it from the way that we use it, and then we're going to needle down. So first of all, we're using nanoGPT from Karpathy. It's just the best: it's nice and minimal, it works, you love to see it. We're going to get the additional dependencies that are required for this repository that aren't included in Colab; the rest of them are included in Colab by default. You will need a GPU environment for this in order for it to work well. We'll clone the repo, we'll move into the repo, and then we'll just look at how it works, right? So we can sample from GPT-2 XL: we're going to put in an input sequence of this text, 'What is the answer to life, the universe, and everything?', and then we're going to want a response of one generated sample, with a maximum of 100 new tokens. And we get the classic, right? So we get: what is the answer to life, the universe, and everything? That's what we asked. 'One possibility is that they have a universe-wide perspective that allows for no contradictions. And if so, then why is there...' I mean, this doesn't make sense, but it's related to the question and it's got words like Big Bang Theory, so we're clearly in the right place. This is just GPT-2, right? We've learned to expect a lot more from our language models. But the idea is that we pass in text, and then we receive out text. 
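For reference, the Colab steps just described look roughly like this (a sketch, assuming a GPU runtime; the flags mirror the configuration variables in nanoGPT's sample.py, but check the repo for the exact names and dependency list in the version you're using):

```python
# Notebook-style cells; tiktoken is one dependency nanoGPT needs that Colab doesn't ship by default.
!pip install -q tiktoken
!git clone https://github.com/karpathy/nanoGPT.git
%cd nanoGPT

# Sample from GPT-2 XL: one sample, up to 100 new tokens, seeded with the prompt from the walkthrough.
!python sample.py \
  --init_from=gpt2-xl \
  --start="What is the answer to life, the universe, and everything?" \
  --num_samples=1 \
  --max_new_tokens=100
```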
Now, there's a lot that goes on in between, so let's take a little bit of time to look at it. Number one, we have to get into this idea of auto-regression, right? Causal. So these words, what do they mean? Basically, what they mean is: you take an input, generate a single token, then append that generated token back onto the input, and do it again, over and over and over. Auto-regressive, right? So how does this manifest in code? Well, if we look at the generate function in the nanoGPT library, we're going to get an insight pretty quickly. Generate takes some list of indices, it takes a max new tokens, it takes a temperature, and it takes a top k. The indices are the actual things we have in our sequence so far; in this case, the indices are going to be the tokenized output of this bit right here, the prompt. So what do we do with those? Well, our max new tokens comes in pretty quickly: we just repeat this step a number of times equal to the maximum new tokens, easy peasy. Then we're going to do some truncating. What we're going to do is make sure that the index fits in our block size. If it doesn't fit in our block size, we're going to have a bad time, and the way we deal with that is we just lop off the top of it so it does fit. So we can keep generating forward, but we're going to lose context from the beginning, right? We talked about this in our long context event we did last week, if you want to go deeper into the context window. Then we're going to get some logits, right? Classic. We need our logits; without logits, what are we doing? The way we do that is a forward pass of the model. The forward pass of the model gives us logits and a potential loss. In this case, the loss is going to be None, so we just chuck it into this wildcard and don't care about it. So now we have logits. Remember, the logits are the unnormalized scores. What this means is that they're just a representation of how highly certain tokens scored, and the logits exist across our whole vocabulary: every token in our vocabulary has a score, right? We're going to just not care about all of the other positions in our sequence, and we're only going to take the final set of logits, the logits for the next token. So this is the last position in our logits, which you can see here. I know we're kind of dealing with these 3D arrays, right? So we've got these sweet tensors, thanks, transformers. But the idea is, this is for our batches: we want to include all of our batches, we only want the last element of our sequence, that's the last token, what will become the next token, and we want all of the logits for that position, because that's how we're going to determine what the next token is. We're then going to scale it by our temperature; we'll look in a second at how that math checks out. And then, of course, we're going to use our top k. All top k is doing is saying, hey, we're going to get our top k from our logits, and we're going to choose either our top k or whatever the size of our logits is. 
This is to say, if we select a top k that is larger than the number of elements in our logits, then that's just all of the logits, right? That's all this is doing. And then all we're going to do is say, hey, for every logit that's not being considered, we can't remove it, right? We can't eliminate it, because we still need the information about how many elements we have; but we're just going to set those logits to minus infinity. Minus infinity, classically, is going to be not selected no matter what transformations we do to it, because it will always be the smallest thing, so it will never be selected. That's the idea: for every logit that is outside of our top k, boom, you're minus infinity; we don't care about you anymore. Then we do the good old softmax. We do this across the correct dimension, to make sure we're doing softmax over all of the elements in each set of logits. And then we sample from that resultant probability distribution to get our next index. Then we concatenate our existing indices with the next index. So what does this mean? Basically, the result of this sampled index is going to be the token that we want next, and we're going to append it to the existing list of tokens. It becomes the next token, right? And then we just return, at the end, whatever we've concatenated. Because in the example we do this 100 times, we'll have repeated this process 100 times, we'd have added 100 new tokens, and we're going to return idx, which is now going to be 100 tokens longer. And that's the thing, right? But there's a lot of stuff happening right here, and then there's a lot of stuff happening kind of here, right? So let's do a little bit of a zoom in. What are logits? Greg's walked us through that, so that's great. And what is temperature doing, right? I think top k is intuitive: we choose the top k logits, so if you are among the highest k logits, you will be kept, and everything else turns into minus infinity. What we're going to do with temperature is see how it influences the resultant probability distribution. So let's just pretend our logits are 6, 2, 7, 0.1, minus 8, 9. We're making stuff up; it doesn't matter. If we were using argmax, we'd just say this one, the 9, is the one we care about; this index is what our token is going to be. But of course, we are not actually going to use argmax. Greedy decoding is fine: you just always pick the most likely next token every time, and there's no chance you do anything else. That is a valid decoding strategy, though you might find that the resultant generations lack creativity and the like. So what we're going to do is scale these logits by our temperature, in this case just 1, and then do our softmax operation to get a distribution. You'll see that we get a pretty wide range of values, everywhere from 8.43 times ten to the minus one all the way down to ten to the minus eight, so a very small value. The idea here is that this one is the highest, right? 
So nine is the highest, but the six, you know, it's not doing so bad, right? And we have this seven that's doing a little bit better. So it's not like nine is the only thing that matters in that distribution. If we want to be very crass about it, there's an 84% chance we're going to select that nine, but that means there's a 16% chance that we're not going to select that nine, right? So now if we look at a very low temperature, these logits get pretty big: we go from nine all the way to 90. And when we look at our resultant probability distribution, we can see that all of the others are very small, down to one that is ten to the minus 74, and the 9 has a 99.9999998% chance to be selected, right? So when we scale those logits by that low temperature, we make the one that is highest much more likely to be selected. And then the opposite: if we use a large temperature, we can see that they all settle into the same kind of range, where instead of 99.99-something with a low temperature, or about 84 to 85 percent with temperature equal to one, the top token is only 27% likely to be selected when we use a high temperature, versus the second place, which is at 22%. So this is the idea of temperature and how it modifies that resulting softmax operation that we do. And the idea here, again, is that we either get a smoother, more close-to-uniform distribution, which means we're more likely to get indices that we wouldn't have before, or we get a very sharp or distinct distribution. There you go. So, for generation, we get this pseudocode: for some range, which is user decided, we check and make sure our current range of indices will fit our block size; if it doesn't, we trim it so it does. We get the logits for the provided indices. We scale the logits. We optionally crop the logits. We apply softmax to convert our logits into a probability distribution. We sample from that probability distribution. We add the sampled index back to our input indices; that's the auto-regression coming in, right? And then we're done. So how do we actually get to logits? We understand we're using logits, and those logits exist across that whole vocabulary space, but how do we get to them, right? We know we're getting attention scores when we're using our decoder stacks; we have this idea that we're scoring things, where certain things are going to be scored higher based on their relationships or the surrounding context, yada yada. So the idea is that we have this LM head, and all it's doing is taking the output of our decoder stacks, which is in our input dimension, this n_embd, the internal dimension of the model, and projecting it onto our vocabulary size. So this idea that we're going to literally project from our internal embedding space, our internal representation, sorry, to our vocabulary. That's it. This is a learned process, to be very clear; it is learned. So, spoiler alert, this is where loss is going to come in, right? But the idea is that we're just projecting to get to that vocabulary space, because the internal representation of the model is not useful outside of the model, right? So we need some way to project it into a space where it is useful. 
And we're going to use our linear layer to do that. And that's it. That really is the thing that does it, right? There's nothing more complicated in nanoGPT. In fact, there is no softmax that is applied in the model; it's applied afterwards. Now, that doesn't have to be true, you can do it in the model if you want, but nanoGPT does it as part of the generation as opposed to part of the actual model, so there's no softmax layer in nanoGPT. Now, how do we get from logits to loss? We've got to tie this into loss somehow. So the idea is: we get our decoder blocks to take input and compute attention scores; we project those scores from our internal model dimension onto our vocabulary; we use the obtained raw, unnormalized scores, the logits, to find a probability; we sample; we append the token; we rinse and repeat. So how does this relate to loss, right? Well, how do we know what to score things, and how do we know the scores are correct, or what their degree of correctness is? While we're training, we need something that we can target, right? This is the whole idea of machine learning: we need labels or targets. So how do we know to produce certain scores for certain tokens? This is where loss is going to roll in for us. And with that, I'll pass you guys back to Greg. Yes. Awesome. Masterclass on predicting the next token. And as you guys can probably imagine, we're going to continue this series by covering loss in depth, but we want to give you a little bit of an insight into it now and make sure we're leaving you off at a good stopping point. As was mentioned, in order to train, we have to know loss. And the logic train is something like this: we want to train models to predict the next most likely token, and in order to do that, we need this loss function. But what is loss? Well, if we think about these logits, that's our distribution, let's say. We want to measure the distribution that we have and make it a little bit closer to the distribution we desire. So we have some training data, and that training data represents the next words we would expect to predict; we want to make sure we're able to do that. So how do we calculate this loss exactly? Well, interestingly, and unsurprisingly if you've got a machine learning background, perhaps, we use cross-entropy loss. Now, this is something you might be familiar with. It's going to be the topic of our next event on this, where we dive into lots of depth on cross-entropy and exactly what's going on, but it does connect back to our initial hot take that LLMs are kind of classifiers. Okay, so when we talk about cross-entropy here, the question is: the cross-entropy between which two things? And we're looking at the logits and also the targets. The way that we differentiate the logits and the targets is that we can think about the logits as coming from the sequence up to the next word, and we can think about the targets as being a shifted sequence, up to the word we want to predict. In order to train, we need input, we need output. 
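A minimal sketch of that shifted-targets setup (toy sizes and random logits are made up; this mirrors the general GPT-style training pattern rather than nanoGPT's exact code):

```python
import torch
import torch.nn.functional as F

B, T, vocab_size = 2, 8, 3                          # toy sizes (made up)
tokens = torch.randint(0, vocab_size, (B, T + 1))   # a batch of sequences, one token longer than the block

inputs  = tokens[:, :-1]                            # positions 0..T-1: what the model sees
targets = tokens[:, 1:]                             # positions 1..T: the same sequence shifted left by one

logits = torch.randn(B, T, vocab_size)              # stand-in for the model's output scores over the vocabulary

# Cross-entropy compares (N, vocab) scores against (N,) class indices, so flatten batch and time together.
loss = F.cross_entropy(logits.view(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```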
And interestingly, this is what it looks like down in the nanoGPT code for the cross-entropy calculation. If targets is not available for us, because we're not training, what are we doing? We're doing inference. And when we do inference, we're using that language model head on the very last position only. So here we can see this auto-regressive nature, where we're predicting future values based on past values of our decoder-stack transformer, and we're shifting as we go during training. This is what we will give you some insight into in our next event. I want to go ahead and bring Wiz back up to the stage to discuss a little bit about the question we began with. We can think about this difference between logits and targets as manifesting in this shift, Wiz, but maybe we can just end on this question and come back to exactly where we are next time. And you guys, shout out in the chat any questions that you have, or please use Slido. But are transformers just classifiers, Wiz? What's going on here? Have we seen that they kind of are? Yeah, I mean, kind of is exactly right. They're kind of classifiers. I mean, they do classify, right? We're not going to use them like classifiers very, very specifically, and if we wanted to, we'd use a different head. But the idea is, yeah, they're classifying in some sense, definitely: we are predicting, of a possible set of classes, or in this case tokens, which is the most likely. Now, the way we're using the model, the auto-regression, kind of moves it into a prediction task, right? But the guts of it are very close to straightforward classification. So we're using auto-regression and classification, then? Is that what we're doing? Yeah, I mean, somehow it's not surprising, right? We have to pick the next token, which means we have to know, given what we have so far, what token follows. We have to pick, out of a set of tokens, the next one. And so it feels a little bit like classification, even without knowing the internals. Yeah, yeah. Okay, well, I want to go ahead and invite everybody to add your questions, and we're going to try to cover as many as we can. A question from the audience was: how do we actually train these networks when there are multiple possible answers? Say that again? How do we actually train these networks when there are multiple possible answers? Oh, that's why we need that probability distribution. That's why we want to sample it, right? So the idea is there could be more than one token that essentially works. So what if we have two tokens that have an equivalent-ish probability? They're never going to be exactly equal, but roughly equivalent. This is the benefit of sampling: sometimes we'll choose one, sometimes we'll choose the other, right? So it's like synonyms in English or whatever. You can choose whichever of the words you wish to, as long as it makes sense, and using the sampling strategy and the temperature strategy helps us to better emulate that. I mean, obviously, you might be leaning more towards one than the other, depending on who you're talking to, or perhaps what you've said previously, to keep things consistent. 
But this is the idea: when we're training, and we're using this logits-softmax approach, we actually can capture this behavior where two tokens are equally likely, or five tokens are equally likely, equally in air quotes. Okay, yeah. So another training question here; people are excited for the next training event, I can see. Wouldn't it be better to train the neural network only up to the logits, with argmax, without using the softmax part during training? Yeah, we don't use the softmax part during training. We just use the logits. Just the logits, without argmax? Just the logits. Just the logits. Okay. Because of the whole thing we just discussed, right? Yeah, that's right. The distribution and... We give them a pile of labels and a pile of produced logits, and we go from there. Yeah. This last question, how do we incorporate context to make our choice on the next word, I think is what this question is asking. Yes. This is sort of a RAG-y question, I think. So, I mean, they're the same token, right? Our logits will be high for that token. Let's say, to simplify things a lot, that the score for that token is going to be high for either context, because it can fit in either context. So both when we're talking about weight and when we're talking about cash, especially if we're talking about Britain and cash, we're going to see 'pound' score highly as a potential next token. So this is the idea, right? The scores per token are generated based on all that context. That's why we need to pass in some amount of text first, in order to get a score that is good for that next token. This is why things like few-shot and many-shot in-context learning work, right? Because essentially what they're doing is modifying the scores of the potential next token through all the stuff that came beforehand. Got it, got it. Okay, so what's next? What are we going to cover? We're going to dive down deep into cross-entropy, get a feel for what loss is, try to understand exactly how useful it is in training and what it looks like, and from there we're going to be able to really open up everything that's at the center of a lot of these more complex techniques, aren't we? There's a lot going on with the loss functions. Yes, we're finally at a place where we can start talking about loss, and it makes some sense, and it can be tied back to what we actually are going to do. All right. All right. So I'm going to go ahead and wrap up. In conclusion, everybody, thank you for joining us today. We saw that the decoder stack of GPT provides us with an output that is in embedding space. We then want to put that output from embedding space into natural language space. We can do that through this calculation of the logits, which takes into account our vocabulary information from that language model head, and we can eventually choose, softly perhaps, what the next best word is. When we look at inference versus training, we can look at logits that help us do inference, and we can look at what the targets are as we slide that window, to try to understand exactly what's going on at the root of training. 
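Pulling those pieces together, block-size cropping, temperature scaling, optional top-k cropping, softmax, sampling, and appending, here is a condensed, generic sketch of the loop in the style of the generate function discussed above (the names, and the assumption that the model returns only logits, are illustrative rather than nanoGPT verbatim):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, idx, max_new_tokens, block_size, temperature=1.0, top_k=None):
    """Generic autoregressive sampling loop, sketching the steps covered in this session."""
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -block_size:]                      # crop context so it fits the block size
        logits = model(idx_cond)                             # (B, T, vocab): assumes the model returns logits only
        logits = logits[:, -1, :] / temperature              # keep the last position, scale by temperature
        if top_k is not None:
            v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
            logits[logits < v[:, [-1]]] = float("-inf")      # everything outside the top k can never be picked
        probs = F.softmax(logits, dim=-1)                    # logits -> probability distribution
        idx_next = torch.multinomial(probs, num_samples=1)   # sample the next token id
        idx = torch.cat((idx, idx_next), dim=1)              # append and repeat: the auto-regression
    return idx
```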
We're going to dig into that next time as we continue our journey into loss functions and deep down into the transformer, through which we'll see a lot of interesting things, including how to optimize running all of this stuff in a compute-efficient way on your hardware and much, much more. So if you enjoyed this event definitely like and subscribe on youtube that is a wrap for our session today and if you would like to follow along with the discussion that we have actively going on in our community all the time and folks in the chat definitely know this and are part of it then please join our discord community we'd love to have you we've got an event coming up in just a few minutes here with one of our best community members. And we are weekly telling stories about some of the most successful folks. I believe we got shouted out in the chat today, Garrett, and what they're out there building, shipping, and sharing. We just kicked off our most recent version of our AI engineering bootcamp last night. it was a banger. And if you're interested in jumping into the next cohort, please reach out to me and let me know if you have any questions. I'm happy to answer for you. And finally, if you have any other feedback at all, we'd love to hear it. If you have any content you'd love to see, we'd love to know about it. And until next time, guys, we'll keep building, shipping, and sharing, and we hope you do the same. We'll see you soon. Bye, everybody. Have a great rest of your week.", "title": "The Next Token: How LLMs Predict", "duration": 3757, "uploader": "AI Makerspace", "upload_date": "20240530", "description": "Join in to learn about the foundational aspects of prompt engineering, retrieval augmented generation, fine-tuning, and agents, to exploring the technical nuances of LLM operations like prompt tuning and the intricacies of token prediction, this event is your gateway to mastering LLM application building. Discover how to effectively manage applications in production, optimize performance, and intelligently evaluate outputs. Whether you're an AI Engineer, a leader in the field, or simply keen on the latest AI technologies, this session promises a comprehensive breakdown of both the theoretical and practical aspects of modern LLMs. Don't miss the chance to expand your understanding from the fundamental mechanisms of token prediction to the advanced strategies of unsupervised pretraining and beyond. Click now to join our \"Everything is Loss\" series and start unraveling the complex yet fascinating world of LLMs!\n\nEvent page: https://lu.ma/nextoken\n\nHave a question for a speaker? Drop them here: \nhttps://app.sli.do/event/1FixiyoBRqad346PixnFcc\n\nSpeakers: \nDr. Greg, Co-Founder & CEO\nhttps://www.linkedin.com/in/gregloughane\n\nThe Wiz, Co-Founder & CTO\nhttps://www.linkedin.com/in/csalexiuk/\n\nApply for our new AI Engineering Bootcamp on Maven today! \nhttps://bit.ly/aie1\n\nFor team leaders, check out!\nhttps://aimakerspace.io/gen-ai-upskilling-for-teams/\n\nJoin our community to start building, shipping, and sharing with us today!\nhttps://discord.gg/RzhvYvAwzA\n\nHow'd we do? Share your feedback and suggestions for future events.\nhttps://forms.gle/WmVKUE3bfAoi1KDS8", "datetime": "2024-06-09T23:42:03.363226"}