| speaker,time,text | |
| Hamza Farooq,00:00:29,"Alright, can everybody see my screen?" | |
| Hamza Farooq,00:00:33,"Yes. Sorry. Okay. I didn't see it. Okay. So we are the last part of our course which is dot product, prompt engineering versus fine tuning." | |
| Hamza Farooq,00:00:46,"There is some technical stuff that I will try to cover. But I there's too much technical stuff over here that I can cover in a one, you know, in one session there's a lot of math involved, but what I'll do is give you an intuition about what all fine-tuning is. And" | |
| Hamza Farooq,00:01:05,"let's let's talk about" | |
| Hamza Farooq,00:01:08,"where we have been so far, what? What we have covered." | |
| Hamza Farooq,00:01:13,"Move the slides up and down. Okay, so what we have done so far." | |
| Hamza Farooq,00:01:19,"we" | |
| Hamza Farooq,00:01:21,"covered encoders." | |
| Hamza Farooq,00:01:23,"We covered a decoder models. We looked we had a guest speaker come in, speak about the transformer architecture and how it is built. So on and so forth." | |
| Hamza Farooq,00:01:32,"We build retrieval systems from scratch. We looked at retrieval system with BM. 25. We looked at retrieval system in different forms." | |
| Hamza Farooq,00:01:40,"there were some Api endpoints that be discussed with end of each class about using hugging using radio as an endpoint you have access to all the code the decoder models that we looked into were most more to use in a rack situation" | |
| Hamza Farooq,00:01:59,"where you could call a certain Api endpoint of of product and use that. And then I think, the last us we spoke a lot about Jean" | |
| Hamza Farooq,00:02:11,"janet came in and he spoke a lot about how to, you know, evaluate different forms and different Lm applications." | |
| Hamza Farooq,00:02:22,"So one thing I did is from our last course. I'll" | |
| Hamza Farooq,00:02:29,"I think. You should be able to get access to that. I will make sure" | |
| Hamza Farooq,00:02:35,"it's there." | |
| Hamza Farooq,00:02:37,"I think so already. So if you're going to cohort" | |
| Hamza Farooq,00:02:41,"boys." | |
| Hamza Farooq,00:02:44,"you should be able to see a new module that I've put in, and this is called a overview of evaluation metrics." | |
| Hamza Farooq,00:02:52,"And if you go here you can actually see the slides that Janine presented." | |
| Hamza Farooq,00:02:58,"So you have. You can sort of go through them yourself. Take a look. I think he has done a great job in" | |
| Hamza Farooq,00:03:05,"talking about it. So something I want to mention is that" | |
| Hamza Farooq,00:03:10,"I would love for you all to sort of think about the benchmarks, don't you? Don't have to implement them. But think about the benchmark that you can include in the Llm. Applications or any form of applications that you built for the capstone." | |
| Hamza Farooq,00:03:22,"It's not a compression for you to do other capstone. I'm not going to push you all to think about or do it, but at least what I would love for you to do is just" | |
| Hamza Farooq,00:03:32,"push yourself enough so that it's a well rounded effect of your learning. Some of you have business implications that you cannot share your business data, or you know something like that. That's not what I'm looking for. What I'm looking for is you to have implemented the knowledge that you have learned in this class into something" | |
| Hamza Farooq,00:03:51,"again, and it sort of helps to, you know. Talk and discuss about it. I'll give you an idea, Bill. One of my students, Bill, he billed a really good product. That looks into Amazon reviews and recommends products." | |
| Hamza Farooq,00:04:06,"A day later Amazon also launched the product which looks exactly like that." | |
| Hamza Farooq,00:04:11,"Of course Amazon did not copy him. You know it. It would take Amazon more than a day to copy someone's hard work. But but the idea that he has the same initial sense, an intuition of what to build a day before it, you know a large company did it themselves so, and Bill had taken it further, he is doing image search also, so I think I will have him come on the demo day and present his project to you all also. So" | |
| Hamza Farooq,00:04:41,"so what I would like for you all to do is when we talk about the capstone project. Just come some, if you intend to" | |
| Hamza Farooq,00:04:50,"do a demo project, please submit one, and then accordingly, I'll try to either" | |
| Hamza Farooq,00:04:57,"converge you with the next cohort, so you all can do a demo day together, and you have time to build one" | |
| Hamza Farooq,00:05:04,"right? So it won't be like you have 2 weeks or so. You might have 5 to 6 weeks to prepare for a demo day, and that demo day will basically give you access. One more thing is I want to mention is that any future guest speakers that we will have, you will just get an invite to them." | |
| Hamza Farooq,00:05:19,"So just because I haven't had any guest speakers, you know, or I had just 2 of them this time. You can. You can. You'll get access to whosoever." | |
| Hamza Farooq,00:05:27,"Okay." | |
| Hamza Farooq,00:05:28,"So I believe this is all what we have covered so far today's class, I move to slide. I don't know why. Today's class. We wanna talk about few things which is number one, prompt engineering number 2 fine tuning Llms. Number 3," | |
| Hamza Farooq,00:05:44,"theft number 4, validation metrics number 5, code Walkthrough and fine tuning models. We'll talk about a local Llm. And II there, I've already shared this quote with you all the the Gpt version how to find you in a chat. Gp. But you can sort of pick it up from there. But we have one of my Susan who's joining in, who will be taking you all? How to fine tune a mistral." | |
| Hamza Farooq,00:06:14,"and to push that date. Hey, Dashi! And to push all that data onto hugging face like the moral onto hugging face. Also. that would be sort of also give you the opportunity to say, Hey, we built a model and we pushed it to to hugging face. Okay." | |
| Hamza Farooq,00:06:38,"okay, so what is prompt engineering." | |
| Hamza Farooq,00:06:43,"Have you all who over here has not used prompt engineering yet." | |
| Hamza Farooq,00:06:47,"or have not been exposed to in any form?" | |
| Hamza Farooq,00:06:51,"And if you have, what has you your best practices been for prompting anyone" | |
| Thierry Damiba,00:07:02,"definitely? Oh, go ahead." | |
| Maaz Amjad,00:07:04,"But go ahead, please." | |
| Hamza Farooq,00:07:08,"Theory. Go ahead." | |
| Thierry Damiba,00:07:10,"Oh, I was. Gonna say, it definitely depends on the use case. But most of my prompt engineering has been trying to get output that fits whatever my use case is. So." | |
| Thierry Damiba,00:07:21,"for example, I was working on a project where the tragedy would tell a story. and I had to do some messing around with the prompts because" | |
| Thierry Damiba,00:07:32,"it didn't always want to tell a story. Sometimes it just wanted to answer a question, or sometimes it wanted to tell too long of a story. Things like that." | |
| Hamza Farooq,00:07:41,"Yep." | |
| Hamza Farooq,00:07:44,"yeah. So" | |
| Hamza Farooq,00:07:47,"so this" | |
| Hamza Farooq,00:07:49,"is problem number one. When you do, I mean." | |
| Hamza Farooq,00:07:52,"I'll explain what prompt engineings. And then we'll go into the problems front engineering involves designing and refining language models prompt to achieve specific desired outputs. It includes crafting prompts that provide clear instructions, contacts, or constraints to get the models. Response." | |
| Hamza Farooq,00:08:07,"I think 6 out of 10 times a prompt box" | |
| Hamza Farooq,00:08:14,"4 out of 10 times a prompt does not work." | |
| Hamza Farooq,00:08:17,"Ii don't know if you all have have that experience. But there are a lot of times that the prompt does not perform in the way that you would like to like for it to perform. Or you know, if you're using Chat Gp, you will see. One result would say," | |
| Hamza Farooq,00:08:31,"based on the based on the based on the context given to me by you. Here are, some of the answers, and you don't want to see that you don't want to see that, and you will try very hard to get through that part, but" | |
| Hamza Farooq,00:08:44,"it's there's no actual given way that will sort of say, Oh, this is the best form to achieve that" | |
| Hamza Farooq,00:08:52,"right? So I would say that" | |
| Hamza Farooq,00:08:55,"prompt engineering exists." | |
| Hamza Farooq,00:08:57,"This is the conversation between customer and a polite, helpful customer service agency input indicator question of the customer output indicator, response for customer service, you know. So and so forth." | |
| Hamza Farooq,00:09:09,"You would read through the language model and create an output for the, for the, for the completion." | |
| Hamza Farooq,00:09:14,"That's what you essentially do when you just use chat. Gbt, you know you say I want you to act like a Harvard Howard stew. You know, Howard, Professor, and I would like you to, answers Abcd." | |
| Hamza Farooq,00:09:28,"Or I want you to answer in such a style. That's all prompt engineering. What you're doing is that you're just giving instruction." | |
| Hamza Farooq,00:09:38,"So so some of the some of the problems with it is that" | |
| Hamza Farooq,00:09:42,"prompt engineering is basically an art 10 sites." | |
| Hamza Farooq,00:09:46,"and the better you are at it. In terms of English, the better you are able. So promising seems to be difficult for some machine learning researchers. This is not surprising, because prompt engineering is not machine learning. Prompt thing is the opposite of machine learning." | |
| Hamza Farooq,00:10:02,"And what that means is, it's truly" | |
| Hamza Farooq,00:10:06,"how good your English is, or the language that you are coding it." | |
| Hamza Farooq,00:10:10,"The essential part of when you work when you try to do prompt engineering." | |
| Hamza Farooq,00:10:16,"a lot of folks, even on the scall. or some of you in this call English. Is not your primary language right?" | |
| Hamza Farooq,00:10:24,"That's your show of hands. How many people does not have English as a as a primary language. I want to say 1, 2, 3, I say, there are at least 10 people on the call. Right?" | |
| Hamza Farooq,00:10:36,"The number one problem with people who who's English is not a primary language. You don't think I mean as good as your English is." | |
| Hamza Farooq,00:10:46,"Sometimes there are things that are not as fluent as you would like them to be." | |
| Hamza Farooq,00:10:51,"And here's the beauty. Pront engineering was built by Corpus of English language." | |
| Hamza Farooq,00:10:59,"So there's the English that was fed to that prompt or that model that would sort of learn onto becoming your context, for prompt engineering was built on English. That is spoken a lot more in us" | |
| Hamza Farooq,00:11:16,"than any other part of the world." | |
| Hamza Farooq,00:11:18,"So there are intrinsic things on the way. Prompting has been made due to year years of data of that was used to train these models." | |
| Hamza Farooq,00:11:30,"It requires a specific version of English" | |
| Hamza Farooq,00:11:33,"which is not consistent amongst any one of them in this class. So" | |
| Hamza Farooq,00:11:38,"if I ask you to write, make a chargeabitty" | |
| Hamza Farooq,00:11:43,"talk like a Harvard student or a Harvard professor. I can assure you that each one of us will have a different way of writing that prompt, because No. 2, 2 2 ways of writing similar." | |
| Hamza Farooq,00:11:54,"This is the problem with prompting. Prompting is not something which is sustainable. It breaks" | |
| Hamza Farooq,00:12:01,"and it all comes down to one reason why, because the Llms" | |
| Hamza Farooq,00:12:08,"are not deterministic. They are probabilistic." | |
| Hamza Farooq,00:12:12,"I'm gonna repeat it again." | |
| Hamza Farooq,00:12:15,"They are" | |
| Hamza Farooq,00:12:16,"probabilistic, not deterministic." | |
| Hamza Farooq,00:12:20,"Which means when you run a machine learning model." | |
| Hamza Farooq,00:12:24,"Once you've trained the weights every time you have trained the weights you can reproduce the exact same output." | |
| Hamza Farooq,00:12:32,"I know there's some of you who are Terry. I wanna say you do a lot of Ml. right when you let's say you use actually Boost or Lgbm, or whatever whatever you do right." | |
| Hamza Farooq,00:12:43,"If you have, once you have created the model. That model will always predict the exact same thing for that unit test." | |
| Hamza Farooq,00:12:51,"Right? Is that correct like? It will never give you any different answer." | |
| Hamza Farooq,00:12:55,"because the weights are almost fixed, or they are fixed. The thing with elephant is, they are so nuanced and so complex" | |
| Hamza Farooq,00:13:05,"that you can." | |
| Hamza Farooq,00:13:07,"And because you're using a community server. Always remember you use a community server when you use chat. GPT. That is dedicated. You know that you go to the interface. You're using a community version of that product" | |
| Hamza Farooq,00:13:20,"which means the weights are continuously changing. And it is almost impossible for you" | |
| Hamza Farooq,00:13:27,"to recreate that exact output, using the same prompt, maybe 10 min apart." | |
| Hamza Farooq,00:13:35,"What that means is that this model is continuously changing its weights. It's continuing, changing the way it is supposed to create a output." | |
| Hamza Farooq,00:13:43,"So what happens then?" | |
| Hamza Farooq,00:13:46,"Pretty clear? You will not have the ability to produce the same results that you want." | |
| Hamza Farooq,00:13:52,"Now, there is also the nuance. Let's say you decide to host your Llm. Yourself." | |
| Hamza Farooq,00:13:58,"But again, the weights and the weight has been created. We haven't done a lot of that experiment. But the way those weights are created, or the way the Lm. Works, it might still not give you the same prediction." | |
| Hamza Farooq,00:14:12,"Even if you host, your Llm. Yourself. So" | |
| Hamza Farooq,00:14:15,"prompt engineering is the sort of the first line of defense when you're building. You know your model, and you're trying to explain some of the things in that. But you should always know that" | |
| Hamza Farooq,00:14:26,"front engineering will always be one step behind in what you're trying to create. because" | |
| Hamza Farooq,00:14:33,"if you build something in production, you know, like there, we tried to build something in production using front engineering, and we have honestly not been at the best in trying to trying to do." | |
| Hamza Farooq,00:14:45,"I'm gonna pause here. Any questions" | |
| Hamza Farooq,00:14:51,"what's going on?" | |
| Dino,00:14:53,"I think so" | |
| Maaz Amjad,00:14:56,"as I'm losing you?" | |
| Maaz Amjad,00:14:58,"So the question is, I think that's that's a very good point. And and spi specifically." | |
| Maaz Amjad,00:15:07,"go ahead. I can ask like later, you you can ask" | |
| Hamza Farooq,00:15:16,"Dina. Go ahead." | |
| Dino,00:15:17,"get a little bit in more detail around." | |
| Dino,00:15:23,"even if you post you on Llm. Like, what are some of these attributes that would cause" | |
| Dino,00:15:28,"some of the variants, because I hear I've been hearing a lot of hosted, and we'll have." | |
| Dino,00:15:33,"you know, a more predictable, more stable, deterministic output, and which I've seen is not the case, but just like to understand, like the mechanism, why do we get some that variance? Even if people stay locally? Can you repeat the question I sort of." | |
| Hamza Farooq,00:15:52,"I wasn't able to get you." | |
| Dino,00:15:55,"Yes, sorry about it. Can you" | |
| Dino,00:15:58,"explain a little bit more detail, even if you post the moderately. Why, you regret" | |
| Dino,00:16:04,"a variance. and I'll put." | |
| Hamza Farooq,00:16:08,"I think there's so many layers that are used to create your output that it is not consistent in generating that output for you. I know it's a it's. It's not a great answer, but the wait, the way weights work in Llms." | |
| Hamza Farooq,00:16:24,"They do not. They're not consistent." | |
| Hamza Farooq,00:16:27,"and they have so many. It goes through many lists that it almost 100. You are determined not to get the same same response." | |
| Hamza Farooq,00:16:39,"Loom. explain you in a different way." | |
| Dino,00:16:42,"No, no, II get it. Does it have to do with like the attention? But is it? Is it has to do with the attention." | |
| Dino,00:16:49,"But you're more than goes deeper into the deeper" | |
| Dino,00:16:54,"play use of the network." | |
| Hamza Farooq,00:16:57,"you know. I'm really losing your voice. I'm so sorry." | |
| Hamza Farooq,00:17:01,"Yeah, your voice is really breaking, is it? Is it just me who can't hear? Thanks for the invite? But" | |
| Hamza Farooq,00:17:14,"but yes let me" | |
| Hamza Farooq,00:17:17,"let me think more about it. You know I will. What I'll try to do is I'll send you I'll look a little deeper into it and send you some thoughts on that. Okay." | |
| Hamza Farooq,00:17:29,"it's a very good question. Can you all see my screen? By the way." | |
| Hamza Farooq,00:17:34,"okay, it's a very good question. It's very thought, provoking question. I just want to make sure that II get to a good answer of why the weight distribution. How does it affect that? You do not get the same answer. I know there's a probabilistic manner, and because so much corpus has been" | |
| Hamza Farooq,00:17:49,"given to it. So as you go through different layers. There is some change in what happens, and how it adapts to your answer, something to do with the self attention" | |
| Hamza Farooq,00:18:01,"I will get to the bottom of it and try to answer, based on that." | |
| Hamza Farooq,00:18:08,"Did we lose enough? Okay." | |
| Hamza Farooq,00:18:12,"alright." | |
| Hamza Farooq,00:18:14,"thank you." | |
| Rahul,00:18:17,"So maybe there is a case to be made that maybe it is better to be so that it's probabilistic. Because, let's say I'm giving a prom that you reply like a doctor, or you you reply like a lawyer, and in actuality you are none of that, you know. It's just an AI assistant, you know. So maybe you know" | |
| Rahul,00:18:35,"some something to be said that there are inherent dangers in in that." | |
| Hamza Farooq,00:18:41,"I think they are right. did you all read about the air Canada Chatbot thing?" | |
| Hamza Farooq,00:18:48,"Okay, so air Canada Chat Port said to the customers, we will refund you money." | |
| Hamza Farooq,00:18:56,"And now they're saying, Air Canada saying, you should look it up with somebody wanna put put the link to to the conversation, to to that. But what they did is" | |
| Hamza Farooq,00:19:07,"they had a chat the like a Gp. Chat, bot! Talk to the customers, and it was so empathetic towards their concerns because of front engineering. II would believe that they actually said, Oh, we will provide you with the refund." | |
| Hamza Farooq,00:19:23,"And now" | |
| Hamza Farooq,00:19:24,"saying, Oh, it wasn't that. It was a machine that did it. So then there was a case, and I think now they're gonna comply with it, and they're good. They're going to give back money to customers, which was not part of their policy, actually, but because the AI was representing them. So these mistakes will be done all over again." | |
| Hamza Farooq,00:19:45,"And this is honestly the funniest thing that you can imagine that okay, based on. It happens. because they" | |
| Hamza Farooq,00:19:55,"absolutely so is ei liable for the answers and the things that you've done." | |
| Hamza Farooq,00:20:01,"and most importantly, the things that you do not allow it to do." | |
| Hamza Farooq,00:20:05,"And this is what you call the the probabilistic manner of the Lm. That it just decided, because if you do, if you continue to use chat open. Ei, just like this. This is what's gonna happen to you" | |
| Hamza Farooq,00:20:17,"because open air when you get it, you do not get a dedicated version to yourself, so you can't tinker a lot of things associated with it unless you you can fine tune some things from it. You can. You definitely can." | |
| Hamza Farooq,00:20:30,"But this is this was too too funny to to to be to to be passed on." | |
| Hamza Farooq,00:20:37,"II can argue that because the Chatbot response as well like linked to a page with actual period, even should know. So" | |
| Hamza Farooq,00:20:47,"it was. It was just interesting. Was was to promise to update the chat and offer Mobile to a 200 coupon to feature. Use" | |
| Hamza Farooq,00:20:55,"anyways." | |
| Hamza Farooq,00:20:56,"Ivan." | |
| Ivan de Souza,00:20:59,"Oh," | |
| Ivan de Souza,00:21:02,"I would like to ask if this case is late to demonstrates the the importance of the final tuning. Because" | |
| Ivan de Souza,00:21:13,"if we" | |
| Ivan de Souza,00:21:15,"have. I specialize alien model." | |
| Ivan de Souza,00:21:20,"will you avoid this kind of mistake? Right?" | |
| Ivan de Souza,00:21:26,"this discussion is similar. To have a virtual solution is specialized in a specific subject." | |
| Hamza Farooq,00:21:36,"So this is what I'm gonna cover next is local hosting versus, you know, self hosting, and there are also concerns over there." | |
| Hamza Farooq,00:21:45,"So let's let's just say I'm going to call it a local. Lm." | |
| Hamza Farooq,00:21:54,"and let's call it a community serve?" | |
| Hamza Farooq,00:22:01,"Okay? So the first thing about them is when you're trying to create output, right?" | |
| Hamza Farooq,00:22:14,"They both are probabilistic. which means that they're not deterministic, and most likely" | |
| Hamza Farooq,00:22:21,"they will continue to produce different kind of outputs. My belief, like my my basic assumption. One of the major reasons why this happens like this is because of the fact that these models" | |
| Hamza Farooq,00:22:36,"have so many layers in them" | |
| Hamza Farooq,00:22:41,"that is impossible to make them consistent in delivering an answer." | |
| Hamza Farooq,00:22:47,"Okay." | |
| Hamza Farooq,00:22:49,"then the second part is, I'm gonna move this here. Okay. The second part is, let's talk about" | |
| Hamza Farooq,00:22:57,"fine tuning process." | |
| Hamza Farooq,00:23:02,"So actually, let's talk about prompt engineering." | |
| Hamza Farooq,00:23:09,"prompt engineering" | |
| Hamza Farooq,00:23:15,"design. Item." | |
| Hamza Farooq,00:23:26,"prompt engineering is basically you give a text to them,00:23:26,or a certain sort of task." | |
| Hamza Farooq,00:23:36,"So if you host it locally, it produces" | |
| Hamza Farooq,00:23:41,"a little more" | |
| Hamza Farooq,00:23:45,"consistent answers." | |
| Hamza Farooq,00:23:52,"It's a little more consistent. It's not very consistent, but it's a little more consistent because you're not. You're holding all the moving pieces of that model." | |
| Hamza Farooq,00:24:01,"Okay. Now, as you go to number 3, part." | |
| Hamza Farooq,00:24:07,"we will talk about fine tuning." | |
| Hamza Farooq,00:24:13,"So fine tuning" | |
| Hamza Farooq,00:24:15,"is a process. So what do you all think of fine tuning as like? I know there are different people with different beliefs of in fine tuning. What is the base understanding that you have, and any one of you have about what fine-tuning is." | |
| Ivan de Souza,00:24:28,"I think that we use a Britain Britain's model" | |
| Ivan de Souza,00:24:34,"in a." | |
| Ivan de Souza,00:24:36,"we view specific data base. For example, you use a custom sales data base for append companies" | |
| Ivan de Souza,00:24:46,"should train a general retainer model and based on that, you'll be able to provide" | |
| Ivan de Souza,00:24:53,"a more, a little accurate, more customize it outputs." | |
| Hamza Farooq,00:25:02,"Yeah. Now, you said multiple things. Let's let's I wanna see I want to. I have a few slides. I don't. I don't care too much about the slides. I want to get to the main part or crux of why you do fine tuning." | |
| Hamza Farooq,00:25:15,"Fine-tuning is done for task adaptation." | |
| Hamza Farooq,00:25:25,"Please remember that you" | |
| Hamza Farooq,00:25:28,"train the model for a task adaptation. For example, you wanted to do sentiment analysis, or you would like for it to generate. 3d images, or you would like for it to converse like like a chat part." | |
| Hamza Farooq,00:25:44,"So when we fine tune a foundation model to become an instruct. Gp. I'm going to write it. A foundational model foundational model",00:25:44, | |
| Hamza Farooq,00:26:00,"into instruct. Gpd." | |
| Hamza Farooq,00:26:05,"Which is the conversational Gpd. you fine-tune it. What do you do? You give it a ton and tons and tons of text, of information on how to converse?" | |
| Hamza Farooq,00:26:17,"What that does for you is that it adapts to the task" | |
| Hamza Farooq,00:26:22,"so that it has a certain format" | |
| Hamza Farooq,00:26:30,"or style" | |
| Hamza Farooq,00:26:34,"or task." | |
| Hamza Farooq,00:26:37,"It does not promise knowledge." | |
| Hamza Farooq,00:26:43,"I'm gonna write this, maybe in red." | |
| Hamza Farooq,00:26:46,"So you all remember." | |
| Hamza Farooq,00:26:48,"How do I change the color right over here right over here, you say?" | |
| Hamza Farooq,00:26:53,"Not for knowledge." | |
| Hamza Farooq,00:27:03,"Maybe I spelled knowledge wrong. II apologize, but" | |
| Hamza Farooq,00:27:07,"please remember you do not fine tune. This is not the use case." | |
| Hamza Farooq,00:27:12,"You can use to fine tune a mo foundational model to become conversational by giving it tons and tons of data. You can fine tune in Llm. To generate longer text." | |
| Hamza Farooq,00:27:27,"You can find to answer in a certain language and a style like when I say language, the way they answer, not based on the vanilla answer that you get. But basically you use that that I won't. If, when I ask you a question, I would like you to answer in this much words, or this wordy, or this style, or this attitude, or this manner." | |
| Hamza Farooq,00:27:50,"That is why you use fine-tuning of a decoder model. You do not fine tune a model, so give it new knowledge, because" | |
| Hamza Farooq,00:28:03,"it can always hallucinate on you." | |
| Hamza Farooq,00:28:07,"So there are customers who have come to me and said, Oh, why don't you feed all the company data to this model? And when you feed all that data to my company, you know, it should be able to say how many customers we have without even training the model." | |
| Hamza Farooq,00:28:23,"Right?" | |
| Hamza Farooq,00:28:24,"That is the biggest difference that people see, and it is not recommended that with that vision you should be fine tuning your Ldl." | |
| Hamza Farooq,00:28:35,"Your vision of fine-tuning an Llm. Is to come up with a task." | |
| Hamza Farooq,00:28:40,"Hey? I'm gonna use my fine tune to always be good at sentiment, analysis, or to always create a sentiment, analysis, style, the way I would like to see an output, or to only answer in emojis." | |
| Hamza Farooq,00:28:52,"That is what you fine tune in Llllm, for. You don't fine tune in Llllm to" | |
| Hamza Farooq,00:28:57,"gather knowledge" | |
| Hamza Farooq,00:28:59,"and be able to recreate that knowledge out for you." | |
| Hamza Farooq,00:29:04,"That's just I just wanna make sure that that is something that you're able to cover. I know it's controversial to say that. But in my experience of working with these things, that's what I've learned." | |
| Hamza Farooq,00:29:33,"Let's start with Luca, you had a question." | |
| luca.carangella,00:29:35,"Yeah, so so I'm trying to still understand. So if I have a foundational model, and I want and I have some some data. and I want my model to to learn from my data in that case is a rag system, a rag model. Yes, yes." | |
| luca.carangella,00:29:37,"Do I also need to fine tune that to teach the model how to use my data as a task." | |
| Hamza Farooq,00:29:42,"Yes, you can. You can. So you basically will give it examples that if I, when I give you this much context" | |
| Hamza Farooq,00:29:49,"I would like for you to generate an answer which looks like this." | |
| Hamza Farooq,00:29:54,"So you basically give some information to the model. In order to understand that." | |
| Hamza Farooq,00:30:02,"And that's where we come with something call. Also." | |
| darshil,00:30:07,"I think, the example that I'm going to show you will make it very clear where he is finding" | |
| Hamza Farooq,00:30:24,"so and I'll just cover that. Luca. Did you have any more questions?" | |
| Hamza Farooq,00:30:28,"No, no, thank you, Dino." | |
| Dino,00:30:32,"Hi, yeah. I just wanted to give you an interesting scenario." | |
| Dino,00:30:37,"We had a bunch of cowboys in our company that decided they want to fine tune everything" | |
| Dino,00:30:42,"that they fine tuned for a cue and abop, and spent about 3.5 million dollars." | |
| Dino,00:30:48,"We did the same exact application. QA. Bragg against our company docs, we spent almost $2,000. Yeah." | |
| Dino,00:30:57,"but there's a lot of you know. People hear fine tuning. That's like the first thing they wanna run into doesn't sound sexy, you know. But then, like you mentioned, I think so important task specific" | |
| Dino,00:31:08,"like, you need something unique that you that the M. 11. How was trained doesn't know right, like specific lingo for your that's terminology, for your industry, or for your company or content structure. Things like that that the Lm. Never, you know." | |
| Dino,00:31:23,"very hard for the Lm. To generate." | |
| Hamza Farooq,00:31:26,"Yes, exactly. So. That's that's so true, right? And we'll also delve into some of the nuances of fine-tuning on. Where does fine-tuning really comes into play, you know. Eventually, when you look at a deeper level?" | |
| Hamza Farooq,00:31:38,"I will. I will cover that also. Ivan and Benita. I just need 2 more minutes to cover a couple of more topics, and then we'll. I'll open the question. Okay, so" | |
| Hamza Farooq,00:31:49,"you have 0 short example, 0 short example basically means that." | |
| Hamza Farooq,00:31:54,"write me" | |
| Hamza Farooq,00:32:00,"a poem about Big Bang. Okay." | |
| Hamza Farooq,00:32:07,"Big Bang, sorry." | |
| Hamza Farooq,00:32:19,"That's an example that you write me up about Big Big Bang theory, right? That is your" | |
| Hamza Farooq,00:32:25,"1 0 shot that you do not give any example. You just say I want you to give me an output. That's your prompt engineering. In the few short example." | |
| Hamza Farooq,00:32:35,"in the few short example. what we do is, we say. I will give you an example." | |
| Hamza Farooq,00:32:51,"For instance. I will say I will write you a sentence." | |
| Hamza Farooq,00:33:06,"and your output would be a smiley face. a sad face." | |
| Hamza Farooq,00:33:13,"Or maybe you will do across analytic. These are the 4 options that you will give." | |
| Hamza Farooq,00:33:19,"So basically." | |
| Hamza Farooq,00:33:22,"what we are doing is that we will give them, we will give examples of exact form of output that we need. That is your few short example." | |
| Hamza Farooq,00:33:33,"You give an example of this is how I want you to answer. This is the format. I would like you to say so. If you have, the weather is great, it will say, you will basically say." | |
| Hamza Farooq,00:33:46,"Give me a a, a, a smiley face, and then you will give give an example. Another sentence, let's say, and you say sentence number 2 should be based on something else. So it will. Basically, let's say, this is sentence one. Then you do. Sentence 2. You know I am" | |
| Hamza Farooq,00:34:05,"so upset" | |
| Hamza Farooq,00:34:08,"the output. For this is" | |
| Hamza Farooq,00:34:12,"a frown." | |
| Hamza Farooq,00:34:14,"That is an example that you're giving your GPT as a few-shot example. That's called in-context learning." | |
| Hamza Farooq,00:34:23,"This whole process basically says you have zero-shot, you have few-shot," | |
| Hamza Farooq,00:34:28,"and when you take your few-shot" | |
| Hamza Farooq,00:34:31,"into 10,000 examples" | |
| Hamza Farooq,00:34:34,"and force your model to continue to learn from it," | |
| Hamza Farooq,00:34:39,"that is what is called fine-tuning." | |
| Hamza Farooq,00:34:44,"Right? Zero-shot: give me an answer on this." | |
| Hamza Farooq,00:34:49,"Few-shot: here are 4 examples of how you can use this to make an output for me." | |
| Hamza Farooq,00:34:55,"The last part is" | |
| Hamza Farooq,00:34:57,"fine-tuning, where you actually say: hey, I would like you to take all these 10,000 examples that I've given you, and I would like for you to create an output for me. And that means you have completely changed your weights, and you are now a different model from what we started off with." | |
| Hamza Farooq,00:35:18,"For zero-shot and for few-shot, the model weights do not change; the model does not change at all." | |
| Hamza Farooq,00:35:25,"But when you fine-tune, the model weights, and the model architecture itself, change so that it becomes more tuned towards what you are doing." | |
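The three regimes above can be sketched as plain prompt strings, using the sentiment-to-emoji task from the example (the exact wording is illustrative):

```python
# Zero-shot: ask for an output with no examples at all.
zero_shot = "Give me an emoji for the sentiment of: 'The weather is great.'"

# Few-shot: prepend worked examples in the exact output format we want back.
examples = [
    ("The weather is great.", ":)"),
    ("I am so upset.", ":("),
]
few_shot = "\n".join(f"Sentence: {s}\nOutput: {e}" for s, e in examples)
few_shot += "\nSentence: I love this course.\nOutput:"

print(zero_shot)
print(few_shot)
```

Fine-tuning is then the third regime: instead of two in-prompt examples, thousands of such pairs are used to actually update the model's weights.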
| Hamza Farooq,00:35:34,"Now I'm going to take questions. Let's start with you. Benita, and then I will go to Ivan." | |
| Benita Houston,00:35:41,"You answered the question, 'cause I was gonna ask if fine-tuning was changing the weights. You just answered it. Yes." | |
| Hamza Farooq,00:35:52,"Ivan." | |
| Ivan de Souza,00:35:55,"If I would like to apply some constraints on the model, for example:" | |
| Ivan de Souza,00:36:01,"if you have a supermarket that would like to use an LLM to provide recipe recommendations for customers," | |
| Ivan de Souza,00:36:12,"I would like to avoid that this supermarket app includes a poison as part of the recipe, because the recommendations use all the products available in the supermarket in this case." | |
| Ivan de Souza,00:36:28,"Where we need to apply some constraints on the model," | |
| Ivan de Souza,00:36:34,"do we use the fine-tuning approach or the RAG approach?" | |
| Hamza Farooq,00:36:40,"If you do not want it to hallucinate, you should use the RAG approach, because you have more control over the hallucinations." | |
| Hamza Farooq,00:36:47,"But if you want it to answer in a certain format, in a certain language, in a certain style, then you have to fine-tune it." | |
| Ivan de Souza,00:36:55,"Okay, great." | |
| Hamza Farooq,00:36:57,"Right. No, do not imagine that you can just fine-tune your model to create knowledge, because it can always hallucinate, and you will not know what's right and what's wrong." | |
| Hamza Farooq,00:37:07,"But if you want the style and the attitude and the manner and the approach and the way it answers: yes, you can fine-tune it." | |
| Ivan de Souza,00:37:14,"Okay. So, summarizing: fine-tuning is more related to the format of" | |
| Ivan de Souza,00:37:19,"the output, and the RAG approach is about the content or" | |
| Ivan de Souza,00:37:26,"the knowledge domain that you apply to the model. Right?" | |
| Hamza Farooq,00:37:33,"That's the idea" | |
| Hamza Farooq,00:37:36,"Wim, you look agitated." | |
| Wim Veninga,00:37:46,"No, I'm not, sorry. Alright. So, Maaz, you have a question." | |
| Maaz Amjad,00:37:51,"Yeah, so I think, you know, also a great question, and also Ivan's. So the question is, how can we maintain a balance between hallucination versus the knowledge that we want to extract? So, for example, you mentioned that if we fine-tune a model," | |
| Maaz Amjad,00:38:13,"it does not mean that the model will not hallucinate, right? So, in your experience, what are the possible ways that we can minimize hallucination?" | |
| Hamza Farooq,00:38:25,"You should improve your retrieval system." | |
| Hamza Farooq,00:38:30,"and you should always measure your RAG performance." | |
| Hamza Farooq,00:38:36,"So in the last class, Jeanette mentioned something called RAGAS, which is evaluation of LLMs." | |
| Hamza Farooq,00:38:44,"So what you do is, you create a training data set" | |
| Hamza Farooq,00:38:49,"or a golden data set, and you check the performance of your model through that perspective. Listen, I'm not saying it will not have that knowledge; it might have that knowledge. You just don't know how it will use that knowledge." | |
| Hamza Farooq,00:39:06,"I'm going to repeat it. It might have some knowledge, and it's always better to have some knowledge, yes. But will it always be right? I don't think so." | |
| Hamza Farooq,00:39:17,"Like, for instance, Ivan: you run a supermarket, and every day you fine-tune the model with all the transaction data. Right? So if you were to ask, how many items of this Apple mouse did we sell" | |
| Hamza Farooq,00:39:35,"today? It will say, you sold 5. But you say, I'm a supermarket, dude. I don't sell Apple mice." | |
| Hamza Farooq,00:39:45,"And if you say, I want you to tell me how many Apple mice I sell, and I want you to make sure that you answer me" | |
| Hamza Farooq,00:39:52,"with a number. Guess what happens." | |
| Hamza Farooq,00:39:55,"It will still give you an answer, because you pushed it to give you an answer." | |
| Hamza Farooq,00:40:02,"So this is where it comes in, as opposed to RAG, because RAG is returning you the source, the ground truth of where it is coming from." | |
| Hamza Farooq,00:40:12,"That's a major difference." | |
| Hamza Farooq,00:40:15,"So, as Dino said previously, do not spend too much money on fine-tuning until you have" | |
| Hamza Farooq,00:40:22,"run through all the different ways of doing stuff, and you are at the last resort" | |
| Hamza Farooq,00:40:30,"to actually get to a point where you're like, oh, I feel comfortable." | |
| Hamza Farooq,00:40:34,"I know Dino has another question: can we use fine-tuning and RAG as a hybrid, too? Yes, you can. Of course you can. It comes down to how you want to utilize them, how you want to utilize the product, and how it is built out. That's something that I feel is very important to mention over here." | |
| Hamza Farooq,00:40:51,"It really depends on your use case. Air Canada is a great example. I don't know why on earth they created a product like that, but they did, and look what happened. And there will be more and more examples of this until they finally get to a point where some research comes up on how to reduce hallucinations within an LLM. Even our company, we are working on that:" | |
| Hamza Farooq,00:41:14,"how to reduce hallucination on that path. Ivan?" | |
| Ivan de Souza,00:41:18,"In this process of applying fine-tuning in a RAG approach, is there a metric that we should measure, that we should follow," | |
| Ivan de Souza,00:41:31,"to understand how the model performs?" | |
| Hamza Farooq,00:41:37,"say it again" | |
| Ivan de Souza,00:41:39,"In this process, fine-tuning and RAG," | |
| Ivan de Souza,00:41:43,"Is there a specific metric" | |
| Ivan de Souza,00:41:46,"that we need to measure, to understand the model performance?" | |
| Hamza Farooq,00:41:53,"Yes." | |
| Hamza Farooq,00:41:54,"Yes. So I think, as we open up, as I invite Darshil to give a talk to take you through fine-tuning a Mistral model, he'll talk to you about some of the metrics." | |
| Hamza Farooq,00:42:06,"And if not, I will also try to cover them, you know, as we go." | |
| Hamza Farooq,00:42:11,"Essentially, at the end of the day, it really depends. For RAG, on the retrieval side, you want to look at recall at K or precision at K; for generation, you want to look at ROUGE." | |
| Hamza Farooq,00:42:26,"RAGAS is really good. I would say that if you focus on RAGAS, you'll get a lot of good answers based on that. I'll take one last question from Dino, and then we'll move on." | |
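Precision@K and recall@K mentioned here can be computed directly against a golden data set; a minimal sketch (the document IDs are invented):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents found in the top-k."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / len(relevant)

retrieved = ["d3", "d1", "d7", "d2", "d9"]   # ranked retriever output
relevant = {"d1", "d2", "d5"}                # golden (ground-truth) set

print(precision_at_k(retrieved, relevant, 4))  # 2 of the top 4 are relevant -> 0.5
print(recall_at_k(retrieved, relevant, 4))     # 2 of the 3 relevant docs found
```

Frameworks like RAGAS compute richer variants of these (plus generation-side scores), but the intuition is the same.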
| Dino,00:42:35,"The other thing, too, we realized with the fine-tuning" | |
| Dino,00:42:40,"experiment we did: you're in it for life. Because we found that now you have to start" | |
| Dino,00:42:46,"modeling model drift, because if you have very fluid data," | |
| Dino,00:42:51,"any data that changes over time, then as a" | |
| Dino,00:42:54,"result, you have to make a decision: when do I have to fine-tune? So we're finding out that you also have to keep up" | |
| Dino,00:43:03,"with the data and figure out, do I fine-tune every month? Every 2 months? And, you know, that's cost, etc., etc." | |
| Dino,00:43:09,"Just so you know the lesson we hit: it wasn't just fine-tune once and walk away." | |
| Hamza Farooq,00:43:16,"Yes, 100%. The data drift in this is gonna kill you." | |
| Hamza Farooq,00:43:20,"And this is where, you know, I feel like ChatGPT has its own moods; you start seeing a trend." | |
| Hamza Farooq,00:43:31,"The way you know that they have updated the model is that everyone's posts start having some sort of the same wordings: 'empowered,' 'enhance,' 'come join us.'" | |
| Hamza Farooq,00:43:42,"You start seeing those new words come in. That's when they updated their own version, and you're now accessing a newer way of thinking. But at a company level, you also need to understand: is it worth the ROI of doing that in the first place?" | |
| Hamza Farooq,00:43:59,"Alright. So what is fine-tuning? Fine-tuning is necessary to optimize language models for a specific task or domain by exposing them to task-specific data, improving their performance and aligning them with the desired outputs." | |
| Hamza Farooq,00:44:10,"There is something like a middle ground, you could say: there is prompt tuning." | |
| Hamza Farooq,00:44:17,"Prompt tuning, I would say, is more along the lines of few-shot. I don't really agree with the examples that are given, but there is model tuning, which is fine-tuning completely, and there is prompt tuning, which is taking some of the parameters and converting them into additional weights." | |
| Hamza Farooq,00:44:37,"Not additional weights, actually; just changing a few weights," | |
| Hamza Farooq,00:44:42,"so that, you know, you have some prompts. And then prompt engineering, basically, is just very, very temporary." | |
| Hamza Farooq,00:44:51,"I've taught you what few-shot is, and I've taught you what zero-shot is. Prompt tuning is more towards a few-shot example, where you actually give it a better way of explanation." | |
| Hamza Farooq,00:45:07,"I wanna cover one more topic, and I'm gonna give it to Darshil right after that. I'm gonna introduce you to a form of fine-tuning. So fine-tuning initially involves" | |
| Hamza Farooq,00:45:20,"completely changing all the weights. You're open to changing any layer of the weights, so that you can basically get to a point where you're like, okay, the whole model has completely changed." | |
| Hamza Farooq,00:45:32,"That was extremely expensive, as we mentioned: 2.5 million dollars, and so on and so forth. It's just insanely expensive. Intuitively, what happens over here is," | |
| Hamza Farooq,00:45:45,"let me see if there's a pointer available. But what happens over here is," | |
| Hamza Farooq,00:45:52,"when you have a certain set of layers." | |
| Hamza Farooq,00:45:59,"Right?" | |
| Hamza Farooq,00:46:01,"what this method does is that it changes only the top layers" | |
| Hamza Farooq,00:46:10,"of the LLM. It does not convert all of them." | |
| Hamza Farooq,00:46:15,"So the advantage of something called parameter-efficient fine-tuning, PEFT," | |
| Hamza Farooq,00:46:21,"is, number 1, that it takes much less time." | |
| Hamza Farooq,00:46:32,"Number 2, it is cheaper." | |
| Hamza Farooq,00:46:38,"Number 3, very similar results." | |
| Hamza Farooq,00:46:47,"What that means is, it will take much less time to fine-tune. You don't have to fine-tune the entire model; it will fine-tune only the top few layers of your model." | |
| Hamza Farooq,00:46:57,"The second part is that it's much cheaper. You can run it very, very fast. Running it faster and being cheaper are correlated with each other, because you're running your LLM for much less time," | |
| Hamza Farooq,00:47:12,"because it is less compute-intensive, and it requires less time, because you're not changing all the layers." | |
| Hamza Farooq,00:47:19,"And it also provides you very, very similar results. So you can do short experiments; you basically use it for short experiments, so that in those short experiments you're able to see the change in performance." | |
| Hamza Farooq,00:47:33,"So what is PEFT? Parameter-efficient fine-tuning methods enable efficient adaptation of pre-trained language models to various applications without fine-tuning all the model's parameters." | |
| Hamza Farooq,00:47:46,"Fine-tuning is usually very costly. PEFT methods only fine-tune a small number of extra model parameters, thereby greatly decreasing the computational and storage costs." | |
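One widely used PEFT method is LoRA (Low-Rank Adaptation), which Darshil uses later in the demo: the pre-trained weight matrix W stays frozen, and only a small low-rank update B·A is trained. A toy numeric sketch in plain Python (tiny 4×4 dimensions purely for illustration):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Frozen pre-trained weight (4x4, here the identity): never updated during PEFT.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Trainable low-rank factors with rank r=1: B is 4x1, A is 1x4.
# Trainable parameters: 4 + 4 = 8, versus 16 in W -- that is the saving.
B = [[0.1], [0.0], [0.0], [0.0]]
A = [[0.0, 0.2, 0.0, 0.0]]

delta = matmul(B, A)  # rank-1 update, 4x4
W_adapted = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

x = [[1.0, 2.0, 3.0, 4.0]]
print(matmul(x, W_adapted))  # forward pass with the adapted weights
```

In a 7B-parameter model the same ratio is far more dramatic: the adapters are typically well under 1% of the full weight count.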
| Hamza Farooq,00:47:56,"One of the great examples is something called QLoRA. I'm not gonna explain the QLoRA math to you, but I'm gonna explain one thing, which is called quantization." | |
| Hamza Farooq,00:48:08,"So what happens to models in the process of quantization? And I'm gonna come back to this thing." | |
| Hamza Farooq,00:48:17,"So the concept of quantization" | |
| Hamza Farooq,00:48:24,"is to change the size of the model." | |
| Hamza Farooq,00:48:33,"When you change the size of the model, you basically switch the model from 64-bit to 32-bit, or to 16-bit, or to 4-bit. You change" | |
| Hamza Farooq,00:48:44,"the size of the model, which makes it smaller. It becomes a smaller model that you can host, and you don't have to convert or change all the data. So QLoRA" | |
| Hamza Farooq,00:48:59,"has 'Q' as its first letter for quantization. So, intuitively, what you need to understand is: you are changing the bits" | |
| Hamza Farooq,00:49:06,"of the model." | |
| Hamza Farooq,00:49:08,"So the size is smaller." | |
| Hamza Farooq,00:49:10,"It's a condensed version of the original model, and you're still making changes to it. And when you push the model, it has those outputs created" | |
| Hamza Farooq,00:49:22,"so that it can be used as a fine-tuned version of your model" | |
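The bit-reduction idea can be illustrated with simple absmax linear quantization: map float weights onto a small signed-integer grid and back, trading precision for size. (This is a toy scheme for intuition, not the NF4 data type that QLoRA actually uses.)

```python
def quantize(weights, bits):
    """Absmax linear quantization: floats -> signed ints of the given width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 levels each side for 4-bit
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -0.31, 0.05, -0.77]
q, scale = quantize(weights, bits=4)      # ints in [-7, 7]
restored = dequantize(q, scale)

# The restored weights are close, but not identical: that is the precision cost.
print(q)
print([round(w, 3) for w in restored])
```

Storing each weight in 4 bits instead of 32 or 64 is where the 4x-16x size reduction comes from.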
| Hamza Farooq,00:49:28,"I'm gonna" | |
| Hamza Farooq,00:49:31,"try to stay away from the extreme technical details of it. But just imagine:" | |
| Hamza Farooq,00:49:36,"quantization is changing the bits. Let's say you go from 64-bit to 4-bit." | |
| Hamza Farooq,00:49:45,"So you're getting a portion of the model; you're getting a very reduced size of the model. And there are techniques for how it is done." | |
| Hamza Farooq,00:49:54,"But you essentially make the model a smaller representation, and then you test: oh, how well is my model performing?" | |
| Hamza Farooq,00:50:04,"So in the quantization process" | |
| Hamza Farooq,00:50:07,"there is a great metric which is used to test the performance of the model. It's called perplexity," | |
| Hamza Farooq,00:50:20,"and that's where the name perplexity.ai comes from. It's used when you are testing the performance of the models at different levels of quantization. Let's say you take a model and you convert it into 2-bit:" | |
| Hamza Farooq,00:50:31,"will it perform at the exact same level that you would like it to perform? When I say perform: the output that it generates, is it significantly good, or is it similar, comparable?" | |
| Hamza Farooq,00:50:42,"So usually, what we see is that if you convert into 4-bit or 8-bit, it is able to generate similar outputs to the original full-size models," | |
| Hamza Farooq,00:50:54,"and they measure that through one of the metrics, which is called perplexity." | |
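Perplexity is the exponentiated average negative log-probability the model assigns to the true tokens; lower means the model is less "surprised" by the text. A minimal sketch from made-up per-token probabilities:

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability of the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = [0.9, 0.8, 0.95, 0.85]   # model assigns high prob to the true tokens
uncertain = [0.2, 0.1, 0.3, 0.25]

print(perplexity(confident))  # close to 1: barely surprised
print(perplexity(uncertain))  # much higher: a quantized model that degrades
                              # would drift in this direction
```

Comparing the perplexity of the full-precision model against its 8-bit, 4-bit, or 2-bit versions on the same text is exactly the evaluation described above.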
| Hamza Farooq,00:51:03,"Alright. Before I move on to Darshil, I'm gonna ask: does anybody have any questions for me so far?" | |
| Hamza Farooq,00:51:13,"Hello." | |
| Hamza Farooq,00:51:15,"I have one question: when we quantize a model, do the weights change, or do they remain the same? The weights change; I think it's a smaller representation. Imagine, like, you're taking some parts of the model. Intuitively, you're just taking some parts of the model; the weights remain the same," | |
| Hamza Farooq,00:51:32,"the model sort of remains the same, but there are some omissions. So" | |
| Hamza Farooq,00:51:37,"think of compression." | |
| Hamza Farooq,00:51:42,"Okay? And compression might not always represent the same thing. Rahul?" | |
| Rahul,00:51:48,"Yeah. So even within the RAG framework, you know, there are some refinement strategies, as in you can send it through many layers of LLMs. Even though, I understand, the compute cost is going to increase and whatnot. But there are other ways to improve the output." | |
| Hamza Farooq,00:52:08,"Yeah, so the retrieval model is completely independent of the LLM itself. Right? It's on the encoder side." | |
| Rahul,00:52:14,"So then you can use a simpler, like a quantized, version to generate the description output." | |
| Hamza Farooq,00:52:22,"So on the decoder side, you do have, you know, the options to refine your results as well. Yeah." | |
| darshil,00:52:29,"Yeah. So I just wanted to add something. While you're talking about weights with respect to quantization, what actually happens is there's a change in precision." | |
| darshil,00:52:41,"So it's something like, for example, if you ask me how much money I have in my bank account, and I say it's $100. But if you ask the same question to Hamza, and Hamza says, I have $100 and 58 cents. So that's the precision. So it's kind of a trade-off between precision and the model size and computation." | |
| Tooba Ahmed Alvi,00:53:04,"Awesome. Thank you. Thank you. Alright, thanks." | |
| Hamza Farooq,00:53:07,"Thierry." | |
| Thierry Damiba,00:53:09,"Hey, I have a question about RAG in general, not necessarily exactly on this topic. But I saw that Gemini was just released with a 1-million-token context length, and they've even talked about 10 million in research. Do you think that's garbage?" | |
| Hamza Farooq,00:53:26,"So here's the thing about the way transformers are made, unless I'm smoking something, which I hope not. The thing is:" | |
| Hamza Farooq,00:53:38,"remember, sequence-to-sequence models always had large context lengths;" | |
| Hamza Farooq,00:53:44,"sequence-to-sequence models can take any context length." | |
| Hamza Farooq,00:53:48,"Self-attention is computationally intensive because it reads everything in parallel." | |
| Hamza Farooq,00:53:57,"Long-range dependency is the biggest thing that brings the change when you move from RNNs to these models." | |
| Hamza Farooq,00:54:07,"You can read whatever length you want, but what the math has shown is that the model tends to forget everything that happened in the middle." | |
| Hamza Farooq,00:54:19,"It tends to only remember things at the start and the end. Because" | |
| Hamza Farooq,00:54:23,"there's a quadratic compute, you know. You have worked with O(n), right?" | |
| Hamza Farooq,00:54:29,"Yeah. So when we are looking at these models, as you read every word," | |
| Hamza Farooq,00:54:38,"it is a quadratic compute, O(n²)." | |
| Hamza Farooq,00:54:42,"So imagine the kind of compute that is needed to get to a 1-million context length. It will break your system." | |
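The quadratic cost is easy to see by counting entries in the attention-score matrix: each of the n tokens attends to all n tokens, so the scores alone are n² numbers per head, per layer. A back-of-the-envelope sketch (the 2-bytes-per-score assumption corresponds to fp16 storage):

```python
def attention_score_entries(n_tokens):
    # Self-attention compares every token with every other token,
    # so the score matrix is n x n.
    return n_tokens ** 2

for n in (1_000, 100_000, 1_000_000):
    entries = attention_score_entries(n)
    # assuming 2 bytes per score (fp16), per head, per layer:
    print(f"{n:>9} tokens -> {entries:.0e} scores (~{entries * 2 / 1e9:,.0f} GB)")
```

At 1 million tokens the naive score matrix alone is on the order of terabytes per head per layer, which is why long-context models rely on approximations rather than vanilla self-attention.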
| Hamza Farooq,00:54:52,"So there is some techniques that they have done." | |
| Hamza Farooq,00:54:55,"I don't think it is really good in the middle. I'm gonna say that I don't think it's really great" | |
| Hamza Farooq,00:55:02,"at remembering all the differences. If you give it, like," | |
| Hamza Farooq,00:55:06,"1 million different things, right? The first token is token one, and you go from one to 1 million, right? So in the middle," | |
| Hamza Farooq,00:55:18,"it might omit a lot of those things," | |
| Hamza Farooq,00:55:22,"and that's what you have seen about them. A paper on this also came out 6 months ago." | |
| Hamza Farooq,00:55:29,"So I'm yet to test it and yet to know. You know, there are a lot of smoke screens, right? Like," | |
| Hamza Farooq,00:55:35,"ChatGPT came out, right? And then they screwed up the first demo." | |
| Hamza Farooq,00:55:41,"And then Gemini, or whatever the thing was called, came out, you know, as multimodal, and it turns out that they had some prompt engineering running in the background which they did not talk about." | |
| Hamza Farooq,00:55:51,"So eventually you will realize that there is always something behind the screens," | |
| Hamza Farooq,00:55:58,"just for" | |
| Hamza Farooq,00:56:00,"bumping up, you know, the stock price or something. I do believe that there should be something great coming up. I don't know if this is it yet." | |
| Hamza Farooq,00:56:11,"all right. Any other questions." | |
| Hamza Farooq,00:56:14,"And if not, we're gonna switch to Darshil." | |
| Hamza Farooq,00:56:17,"So, Darshil, do you wanna introduce yourself? Give a background about yourself, and then you can go ahead and present." | |
| darshil,00:56:25,"Sure, sure. Yeah. So hello, folks, I'm Darshil, and I'm currently pursuing a master's in computer science at Santa Clara University. Overall, I have around 4 years of experience with AI, computer vision, and machine learning. And Hamza was my instructor at Santa Clara University." | |
| darshil,00:56:45,"He taught me pattern recognition and machine learning, and that's where we met. And from there, I've continuously been learning a lot from him" | |
| darshil,00:56:54,"every day." | |
| darshil,00:56:56,"So let me start" | |
| darshil,00:56:59,"alright. yeah." | |
| darshil,00:57:03,"So tell us about what you're gonna cover, and then give details about it, and then we'll take it. Okay. So today, basically, what we are going to cover is a fine-tuning demonstration of Mistral." | |
| darshil,00:57:19,"So we are going to use Hugging Face libraries here: the Hugging Face trainer, the Hugging Face data loader. And eventually we are also going to push the model to Hugging Face, and we are going to see how we can test our fine-tuned Mistral model." | |
| darshil,00:57:36,"Yes, and" | |
| Hamza Farooq,00:57:38,"Go ahead. Good! If you can share your screen." | |
| darshil,00:57:42,"Sure. Okay." | |
| darshil,00:57:45,"I hope you guys can see my screen." | |
| darshil,00:58:04,"fine. So can you guys see my screen?" | |
| darshil,00:58:08,"Okay, okay, cool." | |
| darshil,00:58:10,"So first of all, let me show you the data set on which we are going to fine-tune. And this is also a quick brief on when you should use fine-tuning. So here, what we are trying to do is, we have a dataset which is kind of a question-and-answer set on the code base of a game called Enlighten." | |
| darshil,00:58:34,"The question-answers look something like this. It's a CSV file, and the first column is the class name," | |
| darshil,00:58:43,"which is some part of the code. So let's say it's BaseAttack.cs. The question is, what is the purpose of the BaseAttack class? And here's the answer. There are also some more complex examples, like:" | |
| darshil,00:58:58,"what is the code used in this manager to set the initial state? And then you get the code. So this is kind of a use case where we are trying to make the model answer, write code, or explain the code base of a particular game." | |
| darshil,00:59:13,"So this is one of the unique use cases where fine-tuning would help, because this is not something which Mistral would already have been trained on. And this might be the case if you are trying to create documentation. One of the wonderful use cases that I can come up with is, let's say you have your own API built for something, and you have huge documentation. Now, the problem is, as a developer, if I want to use that API," | |
| darshil,00:59:40,"first of all, I'd have to go through the entire documentation to learn what's in it, how I can call that API, and things like that. But let's say there is an AI chatbot, something like Mistral, which can, you know, just, if I ask it that I want to use this API to perform this and this, give me the code base, or explain to me how I can get started. How wonderful would that be?" | |
| darshil,01:00:06,"so?" | |
| darshil,01:00:08,"let me let me start with the" | |
| darshil,01:00:11,"code base. It's a collab notebook. And" | |
| darshil,01:00:15,"first of all, what we are going to do is we are going to" | |
| darshil,01:00:20,"define a few paths. So this is the train path, the test path," | |
| darshil,01:00:24,"the model name, and everything. And here I am just cloning the git repo. So the git repo is nothing but the code base that we have here." | |
| darshil,01:00:34,"So I'm going to clone this repo. I've performed a few initial steps already to keep it ready for you guys. And here I'm importing everything. So we have the repo here." | |
| darshil,01:00:47,"We have the data set here." | |
| darshil,01:00:55,"Yeah, if you see: class name, question, answer. So," | |
| darshil,01:01:02,"and it goes up." | |
| darshil,01:01:04,"So these are a few complex examples as well. And" | |
| darshil,01:01:09,"I've passed my Hugging Face secret key here. I hope you guys know that you can pass it from here." | |
| darshil,01:01:17,"So that's my secret token key. And now I'm going to actually load my data here. So I'm going to read that training-path CSV that we just saw, and it says that there are 1,910 rows. So we have about 1,900 data points here." | |
| darshil,01:01:35,"And now I'm going to load the model. So here's where we are going to use the quantization techniques. It's the bitsandbytes framework from Hugging Face, which kind of" | |
| darshil,01:01:47,"quantizes your model into 4-bit, and I've put load_in_4bit=True. So this is what allows me to load this model on a" | |
| darshil,01:01:56,"Google Colab machine. And I'm not using Colab Pro; it's just a free Colab with a GPU. So you can imagine how efficient it makes loading the models." | |
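The 4-bit load being described uses the transformers/bitsandbytes integration; a sketch of what such a cell typically looks like, assuming the `mistralai/Mistral-7B-v0.1` checkpoint (the exact model name and argument set in the notebook may differ):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # QLoRA's NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",            # assumed checkpoint name
    quantization_config=bnb_config,
    device_map="auto",                      # place layers on the Colab GPU
)
```

With these settings a 7B model fits comfortably in a free-tier Colab GPU's memory, which is the whole point of the quantized load.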
| darshil,01:02:10,"So it's" | |
| darshil,01:02:11,"loading the model. You see the size of the model is approximately" | |
| darshil,01:02:25,"And for the tokenizer, we are going to use the Llama tokenizer here," | |
| darshil,01:02:31,"Llama 7B. But you can use different tokenizers. You just have to make sure of one thing: if you use a different tokenizer, you might have to change the EOS" | |
| darshil,01:02:41,"and BOS tokens for that particular tokenizer. Anyway." | |
| darshil,01:02:57,"Meanwhile, if you guys have any questions till now, I can answer that." | |
| darshil,01:03:09,"Okay." | |
| Hamza Farooq,01:03:13,"Now, can you do a recap on the data that you have over here?" | |
| darshil,01:03:18,"Sure." | |
| darshil,01:03:20,"so this is the dataset that we are using. Oh." | |
| darshil,01:03:28,"which is kind of a question and answer on the code base of a game." | |
| darshil,01:03:33,"The game name is" | |
| darshil,01:03:37,"enlighten, enlighten is the game name, and these are the kind of questions which are based on the code base." | |
| darshil,01:03:44,"So a basic question starts with: what does the OnTriggerEnter method do?" | |
| darshil,01:03:50,"What is the collider field, and what is its purpose? And as we move to some complex examples," | |
| darshil,01:03:57,"it's more kind of the code base: how is the 'is active' field initialized? And you also have the class name mentioned." | |
| Hamza Farooq,01:04:03,"So basically, they are adding some information over here about the product; there is some information, so it will have a residual memory of the product." | |
| Hamza Farooq,01:04:14,"And sometimes it does give a decent output. I think you'll also do a test, right, to see how it does? Yeah." | |
| darshil,01:04:22,"yes, yes." | |
| Dino,01:04:26,"I had a question. Did you find Mistral better at this task than, for example, GitHub Copilot or CodeWhisperer?" | |
| darshil,01:04:37,"I think GitHub Copilot is something that would help you get started with boilerplate code, as far as I have used it." | |
| darshil,01:04:46,"I don't think it's comparable to this, because this is not just writing code for you; it's also explaining things, and it's also giving you exact details about where you can find this particular function and things like that." | |
| darshil,01:05:01,"So this is more kind of a companion," | |
| darshil,01:05:04,"and you can keep questioning it about different parts of the code, rather than it just autocompleting, right?" | |
| Dino,01:05:12,"Yeah. I wonder if you'd be able to. I guess if you" | |
| Dino,01:05:17,"thinking, how would you expose this, for example, if you're in an IDE?" | |
| darshil,01:05:25,"Hmm. Sorry. Can you? Can you repeat your question, please?" | |
| Dino,01:05:29,"I was curious, like, how would you expose access to this capability if you're working within, let's say, Visual Studio Code or" | |
| Dino,01:05:38,"anything." | |
| darshil,01:05:40,"IDE." | |
| darshil,01:05:42,"So I think, if you want to build something generalized that can explain any code, then maybe you will have to fine-tune it with multiple code bases. But here, what we are doing is just fine-tuning it with the Enlighten code base. So let's say you have some proprietary code base for your company, and you want to make sure that it can explain every part of the code. That's where we can use this particular use case. But if you want to make it something very generalized," | |
| darshil,01:06:11,"then I think you'll need to add more data: more question-and-answer pairs from different code bases. Yes." | |
| Dino,01:06:20,"And would I be able to apply this even if my code base was C++, and I want to do the same thing:" | |
| Dino,01:06:25,"understanding of different classes. Is it extendable, or would you have to put together" | |
| Dino,01:06:32,"a specific output and train it for that output, for using it for C++?" | |
| darshil,01:06:37,"Yeah, definitely. You will have to give it more such question-answer pairs to get familiar with your code base." | |
| Dino,01:06:44,"Thank you. Okay." | |
| darshil,01:06:48,"Cool. So we have our model loaded here, and you can see the usage: it's still just 4 GB out of the 12 GB, whereas the full model would be around 14 GB." | |
| darshil,01:07:01,"so now we will load the tokenizer." | |
| darshil,01:07:21,"Okay, so we have" | |
| darshil,01:07:23,"about" | |
| darshil,01:07:25,"line lack tokens." | |
| darshil,01:07:28,"And now we are going to move forward with fine tuning part. So this is where parameter, efficient, fine tuning or eft comes into place again. We are going to use hugging Face library here, which makes it very easy for us, and as we see, we are using Laura framework, which is a low, rank adoption." | |
| darshil,01:07:47,"So I've just defined the parameters here. Different parameters to get started with" | |
| darshil,01:07:52,"main things which you would like to make sure is the batch size if you're training it on" | |
| darshil,01:08:00,"collab. Then you have to make sure that your batch size is very low, because batch size is something where like if you. If you. If I put here 16, then it will kind of put 16 data points into one batch and load it at one particular point. So if you, if you are doing it on a very low computation, then make sure of batch. Size is very low." | |
| darshil,01:08:30,"those are the parameters, and this is the Sftd trainer from Hugging Face library" | |
| darshil,01:08:35,"which will train our model with about parameters that we just configured." | |
| darshil,01:08:45,"also one more thing to note here is that logging steps. So I've put this logging steps as 10, so it will kind of log the log. The details, I mean the loss and everything in every 10 steps." | |
| darshil,01:08:58,"These are the warm ups, warm up ratio of when you actually want to complete the warm up steps and start with the actual training" | |
| darshil,01:09:06,"and learning rate, and everything is pretty much standard." | |
| darshil,01:09:11,"So let me just start training my model here, and we will start seeing the output better." | |
| Hamza Farooq,01:09:39,"I think if you use a 100 it will definitely move faster." | |
| darshil,01:09:43,"So yeah, one thing we suggest is that if you want, you can get a 100 through the collapro." | |
| Hamza Farooq,01:09:50,"It's not guarantee that you will get a lot of units, but you do get some units. I've been. I've been doing that for a while. and it sort of gives you the option to use, you know, one of those products. But" | |
| Hamza Farooq,01:10:02,"I mean definitely not do it on a CPU have a minimum d. 4 running for you so that you can sort of train the model for that." | |
| darshil,01:10:14,"So you can see we are going to have 4 78 steps, out of which 2 are complete, and we're just running a single epoch. Once it reduce 10 steps. It will kind of log the step number and the training last year" | |
| darshil,01:10:28,"just to speed things up and just do not keep you waiting forever. I've already trained it fine tuned at once yesterday, so I'll just quickly show you that. So I did this for around 30 min, and it ran 2 58 steps, and my logging step was one. So it's kind of logging it for every step you can see training, loss and step number." | |
| darshil,01:10:52,"So eventually kind of goes up and down." | |
| darshil,01:10:55,"but then eventually it starts lowering." | |
| darshil,01:11:00,"So at the end of 2 53 we were kind of about 0 point 8 0 point 9," | |
| darshil,01:11:06,"and I stopped it at around 2, 56 steps." | |
| darshil,01:11:09,"and once you do that you can push the adoptor to your face account. So if you see, I was able to push it here." | |
| darshil,01:11:20,"Yeah. So this is now my model, which I can kind of, you know. Put it public, or I can use this with hugging face. transport transformers." | |
| darshil,01:11:33,"So this is how you can push your model to hugging face hub." | |
| darshil,01:11:37,"And now we are going to test the model. So" | |
| darshil,01:11:41,"I put them excellent. 200. And I'll ask a very simple question initially." | |
| darshil,01:11:47,"what is a computer" | |
| darshil,01:11:48,"just to see that if it generates output that actually makes sense and not just on random words. So if you see it came up with a good output, our computer is a device that can be programmed to carry out a complex set of instructions. And you can see it's kind of" | |
| darshil,01:12:02,"trying to put the context of the game again here. So it says in this game or computer device that can be used to store and retrieve data. This is what Hans, I was talking about hallucination. Since you have fine tuned it on your game code base, it somehow tries to fit that in. It somehow tries to put that context here, that what it means with respect to this game. But however, I just asked it a general question on what is a computer?" | |
| darshil,01:12:31,"And now, we are going to. So we also have a test data here. Let me quickly show you." | |
| darshil,01:12:39,"Okay. So we have over 2 lows here, 10 step 10 and step 20." | |
| darshil,01:12:45,"Okay, so this is the test data where we have kind of similar questions. But these are the questions which are not read by the by the model which are not ingested into the model. And one interesting thing to note here is that here we have" | |
| darshil,01:13:00,"Mcq pair. So it's A, BCD, 4 options. And we have the correct answer here." | |
| darshil,01:13:07,"So we are going to use this, and we are also going to ask the model to explain the correct answer." | |
| darshil,01:13:14,"So I'm kind of using chain of thoughts here." | |
| darshil,01:13:18,"If you see it's chain of thought is drawn through. And this is my prompt answer the question at the end of your response. Right? The answer like this." | |
| darshil,01:13:26,"and we also have added this chain of thought. First think step by step, so it's kind of going to explain us why it reached to this particular answer." | |
| darshil,01:13:36,"So this was my question." | |
| darshil,01:13:39,"Answer the following question at the end of the response, write the answer like this. And the question goes like this, what does the attack? What does the beast attack class do in the Unity project." | |
| darshil,01:13:49,"so all of them will answer like this, manage the best attack behavior and clear hit behavior. So it says that C is the right answer, and since we have mentioned first things step by step, it would. It would kind of explain the B Static classes. Response of managing the best direct behavior. Enter, hit behavior. It is a mono behavior component. It has blah blah blah, and then it gives us the answer. C," | |
| darshil,01:14:12,"and you see, the increase is 100%, because C is the correct option." | |
| darshil,01:14:18,"So this is how it is useful for us, because it's explaining" | |
| darshil,01:14:23,"how it reached to that conclusion. Those are one more example. What is the game? Object that uses the beast attack dot c script" | |
| darshil,01:14:30,"need to have on it?" | |
| darshil,01:14:32,"And the answer, it says, is B and A both." | |
| darshil,01:14:35,"and it's explained here. The beast needs to have a rigid body and a collider on it to be able to move" | |
| darshil,01:14:41,"the cinema machine impulse score is not required for the B's, and you see the answer is wrong here." | |
| darshil,01:14:47,"and accuracy is point 5, because" | |
| darshil,01:14:50,"one of the either of the 2 options is correct, and another one is sponge. So this is obviously because we have only trained, I mean fine-tuned or model for just 200 steps. If you, if you kind of" | |
| darshil,01:15:02,"continue with for at least few epochs, like few 1,000 steps, then it will give you much, much more refined outputs." | |
| darshil,01:15:11,"That is over." | |
| darshil,01:15:15,"Okay. so it has reached 40 steps. Let me just pause it here and see how it performs." | |
| darshil,01:15:44,"Okay, yeah." | |
| darshil,01:15:45,"So I just abruptly interrupted it." | |
| darshil,01:15:52,"And let me. So these are the" | |
| darshil,01:15:56,"more details." | |
| darshil,01:15:59,"And I'm going to push this to happen." | |
| darshil,01:16:03,"Stupid." | |
| darshil,01:16:33,"Okay, so it's pushing my" | |
| darshil,01:16:35,"morning on the hub." | |
| darshil,01:17:09,"and it gives me this URL, where my model has been pushed." | |
| darshil,01:17:16,"So since there are multiple versions, it's" | |
| darshil,01:17:20,"giving me the version history. And this is now something which anyone of you can use if it's public, I think it's public." | |
| darshil,01:17:28,"So if you click here using PEFT. And you see, it's my username attached here. So this fine tune model is something which you can directly use up from here." | |
| Hamza Farooq,01:17:40,"And now that has become a new checkpoint." | |
| darshil,01:17:43,"Yes, yes, and it's not." | |
| Hamza Farooq,01:17:45,"I mean, we're not gonna say it's the best performance, but it has a decent performance. And" | |
| Hamza Farooq,01:17:50,"I'll actually keep going, and then I'll I'll catch up on the rest." | |
| darshil,01:17:57,"So yeah, now let me ask it a bit more good question, like, what does a waste attack class do in the Unity project" | |
| darshil,01:18:09,"let me see, it gives the answer. I could not find the enlightened code base actually somewhere, otherwise we would have created questions ourselves and tested it." | |
| darshil,01:18:20,"And this code base is pretty much the same that we saw on the previous example." | |
| darshil,01:18:25,"So this something which, where we can use fine-tuning. And this is how fine-tuning can be used. All that I would say is, you require a clean data." | |
| darshil,01:18:37,"correct data and" | |
| darshil,01:18:39,"you need to keep it under training for a few 1,000 epochs, few 1,000 steps, and that will kind of solve your purpose. But make sure that our data is" | |
| darshil,01:18:49,"is something which exactly meets your use case." | |
| darshil,01:18:53,"and that will have a huge impact. Like most of the time when I don't get desired outputs. The main culprit is data." | |
| darshil,01:19:01,"Yeah, what do you have?" | |
| Hamza Farooq,01:19:04,"I think that is great." | |
| Hamza Farooq,01:19:08,"Okay, so can you? So folks, when you build something you can push it to hugging phase, it becomes available for for you to utilize it, and once you're able to utilize it, then it is a new checkpoint endpoint for you." | |
| Hamza Farooq,01:19:22,"With that. I'm gonna quickly move over. First I'm gonna pause any questions so far" | |
| Hamza Farooq,01:19:29,"for anyone." | |
| Hamza Farooq,01:19:32,"if not what we're gonna do is that we're gonna move over to 2 very interesting folks Amot. I see you have one more friend so we have invited so the context is, last week, last last or last weekend we I actually went to Atlanta" | |
| Hamza Farooq,01:19:52,"and at Georgia Tech we had a hackathon but and we gave hotel data. Set that you, some of you remember some of you have seen the hotel data set, and then we gave, Amos team an access to our Api endpoint, which is the Ares Api" | |
| Hamza Farooq,01:20:09,"which you can use to search the Internet in real time and get a sort of results back." | |
| Hamza Farooq,01:20:15,"So they built a hackathon and they landed up first amongst. I don't know thousands of students" | |
| Hamza Farooq,01:20:22,"so, but remember they did it in less than 30 h, which includes our more, our. which includes hugging face being crashed for one day." | |
| Hamza Farooq,01:20:32,"so they essentially probably had, like less than 10 h to put something together. And they did. They hadn't slept. So a more than team. I'm gonna let you guys take it from here. Please introduce yourself to everyone and then talk about the business problem like the data set and problem. Talk about the errors, Api, and talk about how you build and what you built over off." | |
| Aamod Varma,01:20:51,"Yeah, great, thank you for the introduction. Yes. So I'm computer science and math double Major. I want to see you as well. Thanks a lot. So Hi, guys, my name is Ziyu. I'm a second year computer science student at Jordite, tech" | |
| Ze Yu Jiang,01:21:11,"minoring and math as well, focusing mostly poly on a machine learning and modeling." | |
| Ze Yu Jiang,01:21:16,"And yeah, so here we're we're here today to kind of talk about our hackbound project which we named Travergo kind of influenced by a traversal Api. which." | |
| Ze Yu Jiang,01:21:27,"and second, I'll share with you guys the canva" | |
| Ze Yu Jiang,01:21:44,"and everyone see my screen." | |
| Ze Yu Jiang,01:21:52,"Yes, we can. Good. Okay, cool." | |
| Ze Yu Jiang,01:21:54,"So these are some, just a little presentation that we made. So this is our team. Ahmad and I are here." | |
| Ze Yu Jiang,01:22:00,"I think I'm not sure if suction is here with us today, but" | |
| Ze Yu Jiang,01:22:04,"he is another one of our team members first year at joint tech majoring in electrical engineering." | |
| Ze Yu Jiang,01:22:10,"We'll begin with the problem that we're encountered while working on the project was, it's that it's a very hotel focused data set. And" | |
| Ze Yu Jiang,01:22:20,"the main task given to us was to create some sort of query in which a user can put in their preferences for hotels and different" | |
| Ze Yu Jiang,01:22:29,"attributes, that they're looking for, and in which the generative AI will be able to spit out" | |
| Ze Yu Jiang,01:22:36,"a hotel. Their preference, and as well as within the city that they're looking for" | |
| Ze Yu Jiang,01:22:40,"to kind of help, optimize." | |
| Ze Yu Jiang,01:22:43,and citizens with like planning the trips and also finding the hotel that most fits their interests. | |
| Ze Yu Jiang,01:22:52,"But begin with like some problem statements the first thing would be like personalization gaps" | |
| Ze Yu Jiang,01:22:57,"and current booking platforms" | |
| Ze Yu Jiang,01:23:00,"fail to effectively address probably unique preferences make it difficult to find accommodations that truly fit their needs. I know for sure, myself included, that sometimes. What if we were to go" | |
| Ze Yu Jiang,01:23:10,"looking for specific places to visit finding a hotel that really has, like specific qualities, a certain price range, or maybe having a gym or any of these specific attributes, is pretty daunting, especially if" | |
| Ze Yu Jiang,01:23:23,"with, like some current websites that exist where" | |
| Ze Yu Jiang,01:23:27,"they help, you specifically find hotels, and there even is a help desk in which you can call them, and you can work both agent directly. However, that takes a lot of time, and" | |
| Ze Yu Jiang,01:23:37,"there's a lot of very" | |
| Ze Yu Jiang,01:23:39,"annoying details that we'll have to go into in order to achieve something" | |
| Ze Yu Jiang,01:23:44,"simpler." | |
| Ze Yu Jiang,01:23:45,"So that's one of the problems. The second one will be the time consuming research, of course." | |
| Ze Yu Jiang,01:23:51,"doing research on the hotel and actually getting very accurate responses like, of course, I think reviews are one of the things that we look at" | |
| Ze Yu Jiang,01:23:59,"within terms of judging whether or not a hotel would be viable." | |
| Ze Yu Jiang,01:24:03,"But of course that's very time consuming and going through each and every single one of them to ensure whether or not it's accurate and" | |
| Ze Yu Jiang,01:24:10,"doing that just takes a lot of" | |
| Ze Yu Jiang,01:24:12,"time on our hands. And of course, and inefficient interactions." | |
| Ze Yu Jiang,01:24:17,"So sometimes, when, say, you were to talk with help desk. There's a long wait time along with other customers." | |
| Ze Yu Jiang,01:24:24,"or guess in this case clients were also trying to find the hotel. so all in all, it's just a lot of trouble for" | |
| Ze Yu Jiang,01:24:32,"people in general, which is why we came up with our Tarot project, which? I think I'll share my screen at this point so I could" | |
| Aamod Varma,01:24:41,show the live demo as well. That's that's along with the presentation. Yes. Can you stop sharing screen. | |
| Ze Yu Jiang,01:24:47,"Yes, give me 1 s." | |
| Aamod Varma,01:24:55,"Can you guys see my screen?" | |
| Ze Yu Jiang,01:24:57,"Yeah." | |
| Aamod Varma,01:24:58,"okay, so great. So I'll take you through the features as well as the live demo side by side, so you'll get an understanding of like what what we're doing and where we're coming from." | |
| Aamod Varma,01:25:09,"essentially, I'll solve the first 2 features. First, we have sentiment announcing, you will search. So we have. We were given a data set of 150 hotels, and each of those hotels had multiple reviews for them, and we thought that and some of these some of the hotels, and have descriptions, or some of them, did not have some reviews, so we figure that the best way to query and search through all these hotels would be to" | |
| Aamod Varma,01:25:31,"put all the reviews together and then convert that into our vector and then use new search to essentially find the hotel we want. So that's what that's essentially what we did. Using your own search algorithm using sentence transformers to convert all of our other queries as well as the" | |
| Aamod Varma,01:25:49,"actual descriptions and reviews into vectors, we uploaded that into quadrant database or back to database. And then we just query, from that every time a user enters a query." | |
| Aamod Varma,01:25:57,"And after that we get. We take a background, 3 hotels from that. And then we conducted sentiment analysis on each of these hotel reviews. We gave it like a score and averaged it out. So each hotel was given a specific score based on the sentiment analysis on each of the reviews." | |
| Aamod Varma,01:26:14,"And then we rank all this the 3 hotels based on that. So I'll just give a demonstration of that for now. so I could do something like Give me a good hotel" | |
| Aamod Varma,01:26:25,"in New York with Wasai so essentially" | |
| Aamod Varma,01:26:30,"right now it's going through. It's querying a quarter database. And then these are the hotels that have figured out and what it's doing right? And the the text below it is basically a decoder model. That's just gonna explain why that specific hotel is fit for you. How that works is that" | |
| Aamod Varma,01:26:48,"we can see the decoder tomorrow, but that works is that we gave the" | |
| Aamod Varma,01:26:52,"We use open air. In this case we gave the description and reviews of the hotel. And then we gave that query that we want, and we basically just ask it, what? Why, the query is satisfied from the hotel's descriptions. So" | |
| Aamod Varma,01:27:07,"here, you could see we did 3 hotels. Let's say I'm really interested in the I could click on this button. And then it gives me like a conversation. Anthony, so essentially through this interface I'm able to like converse with" | |
| Aamod Varma,01:27:22,"in this case, GPT. 4 to ask about the specific questions about the hotel, and so on and so forth. So I guess I could do something like. Tell me more about" | |
| Aamod Varma,01:27:34,"the rooms" | |
| Aamod Varma,01:27:36,"now, depending on the question. It's gonna check with Ares Api or not. So in this case. So what I did for this demo is that every time it goes to the area, Api, I explicitly stated down below, it's going to go to S. Api. So you would understand, like how we integrated it. So right now, you don't really see anything like that. That's because," | |
| Aamod Varma,01:27:56,"the information about the rooms are already there in the description. So in the in these kinds of cases, when we know that the information is there in our data set. We don't really. It's a waste of time to actually use an Api to figure out an answer that we already know. So we use another." | |
| Aamod Varma,01:28:13,"there's another layer that goes between our actual query and the answers that it gives. And this layer basically tells us, if the question that we ask is within our data set, or if it's something that's answerable by us. Now." | |
| Aamod Varma,01:28:31,"we could ask some other questions, such as what do the Web viewers say about the hotel?" | |
| Aamod Varma,01:28:38,"So now, this is something that's also darn data, said we, because we appended all of our reviews as well as a description. If that makes sense, because when we converse with Openai, or it's able to like, give opinions of others, and because it doesn't really have opinions of itself." | |
| Aamod Varma,01:28:55,"But now let's look at some a different kind of question. Let's say we want some like, give me" | |
| Aamod Varma,01:29:01,"list of restaurants nearby." | |
| Aamod Varma,01:29:04,"Now, this is something that's we definitely don't know, like, we don't have information about this. So actually, I'll show you the actual response from Mary Cpi as well. So" | |
| Aamod Varma,01:29:14,"you can see it's going to errors the Api, and it return me a list of hotels, I mean, restaurants. So based on that information how we did it was and streamlit every time. We click on a button we enter a a query reloads the entire thing. So every time I" | |
| Aamod Varma,01:29:31,"we ask something, or we receive information from it. You can see, I explicitly stated that here when we receive information from it, what we do is we append that to our initial message. So if you look back in the context window that we are giving to." | |
| Aamod Varma,01:29:45,"We look back on our message history that we're giving to a. We append this information so essentially. this information, to the beginning and based on that. It's gonna give me" | |
| Aamod Varma,01:29:58,"It's gonna give me the answer. Now, we could do something like, how is the gym facility in the hotel." | |
| Aamod Varma,01:30:07,"Now." | |
| Aamod Varma,01:30:08,"this is not there in description, or any of the information that we have. So again, it's going to go to in this case, and then it's going to give me an answer." | |
| Aamod Varma,01:30:16,"I'll show you response from this as well." | |
| Aamod Varma,01:30:20,"That's right here." | |
| Aamod Varma,01:30:22,"So essentially, that's the main idea of our project because when we ask questions and we we want to know more about a hotel or anything. General. We don't really have. Our Gbd doesn't really have access to real time information. So it doesn't really know about the actual hotel restaurants that are nearby doesn't really know things like that to specific places. So what we did was we use the areas Api to extract information from" | |
| Aamod Varma,01:30:47,"the Internet. And then we attended that to the hotel description and the reviews, and then use that to essentially answer a question. So" | |
| Aamod Varma,01:30:55,"now another thing was. as I mentioned before." | |
| Aamod Varma,01:30:59,"we can't really do this at for every question. So if you look at the first 2 questions I asked." | |
| Aamod Varma,01:31:04,"we only know information about the rooms. So there's no really point of asking Eric, if you about this, even though you can but like" | |
| Aamod Varma,01:31:12,"there's no point in that. It's just a waste of time. So that's why we figured out a way to actually distinguish between. Then we want to ask a question to arise, Api, or when we don't want to ask a question, or when we can answer the question with our given data set, or when we have to retrieve information to answer this specific question." | |
| Aamod Varma,01:31:29,"So overall" | |
| Aamod Varma,01:31:30,"that's the main features of our thing. We have the traversal thing. So we have these most often beginning. Once again, we have the neural search to essentially list down the hotels from our data set. We have a sentiment analysis to prank these hotels that we listed down." | |
| Aamod Varma,01:31:47,"We have a decoder model. And that basically answers, why, that specific hotel that we chose? The 3 hotels that we chose are a good fit for you, and we have a. QA. Chatbot that is integrated with the traversal Api or areas Api, which enables you to ask questions about real time information as well as ask questions about what it already knows through the hotel description." | |
| Hamza Farooq,01:32:12,"Amazing. So go ahead. Anyone wanna add anything" | |
| Hamza Farooq,01:32:21,"alright. So" | |
| Hamza Farooq,01:32:24,"what does" | |
| Hamza Farooq,01:32:26,"first off folks? Thank you so much for this. This is amazing. Really happy on how you utilize the Api to use that. So before I ask any questions, or you know, Del, dive in deeper into it. Can you, anyone else at any ha! Have questions about the implementation on what they did and how they did it." | |
| Thierry Damiba,01:32:49,"Yeah, I have a question first of all, really great job on this." | |
| Aamod Varma,01:32:53,"Thank you. Awesome work. Very impressive." | |
| Thierry Damiba,01:32:55,"I'm curious. Did you have to do any work to make sure that the output of the questions was coming out the way you wanted it to." | |
| Aamod Varma,01:33:04,"So you mean using the quadrant of web to database. Yeah, exactly. Oh, wait. Are you talking about? The output of this question or output of these questions" | |
| Aamod Varma,01:33:14,"both, but feel free to talk about. Yeah. So I'll tell the first one initially. What we did was, we did unknown. We didn't really use your own search algorithms. So essentially, we just convert all these words into vectors and then up on the con what to database, and it just check that checks. That's that's word in that database, and with trees, information or hotels based on that." | |
| Aamod Varma,01:33:35,"But that's probably the best thing to do, because what's different words can have the same meanings. That's why we use a neural search using a sentence transformers. And so if there's 2 words with the same meaning, it would be embedded to the same vector and then" | |
| Aamod Varma,01:33:50,"that will be uploaded to the back to database. And we can query from that. We could search in this case. I think of, it's gonna be using like a cosign distance instead of Euclidean distance. So that's gonna be much accurate in terms of the actual distance from the data that we want in that sense. We before, when we use the normal kind of like figuring out the hotels, it was" | |
| Aamod Varma,01:34:10,"worse in terms of actually figuring out the specific things that we want." | |
| Aamod Varma,01:34:15,"But when we use the New World search for using conference transformers, it was actually pretty good. And we were able to like figure that out in terms of these questions. Yeah, this was a bit of a problem, because a lot of the thing that things that involve with the Gpd for things is prompt engineering and we need to figure out a way to" | |
| Aamod Varma,01:34:34,"convince Gpd. For that. It's like a hotel advisor and needs to answer these questions so initially. What we did was every question that we had we just gave it to Eris? Api got an answer from that appended with our descriptions, and then told the Gpd. Or answer the question based on the above information. But" | |
| Aamod Varma,01:34:54,"Now, that was the problem. So we had another layer with Gb, that's gonna say, if we have, if he can answer this question or not, and the problem with that layer was that" | |
| Aamod Varma,01:35:04,"there are, and there were instances where Gpd. Would say that." | |
| Aamod Varma,01:35:08,"but can answer a special question. It says that it knows information about the restaurants. So, for instance, in this case." | |
| Aamod Varma,01:35:15,"imagine, like Gp. Says, that it knows information about the restaurants nearby. But it actually doesn't. And then when it comes to the next model, that's gonna answer the questions about the answer, the specific question based on the data set. It doesn't know the answer, because it's not there in our data set or not. And the text. So we had to figure out a figure out a way where we can." | |
| Aamod Varma,01:35:34,"We, we can make sure 100%, or at least close like a very, with a very high confidence that Gpd either does or does not know the answer. So what this took was messing with the prompt engineering, messing with the prompt and" | |
| Aamod Varma,01:35:49,"getting it to be 100 sure telling it that you need to like, know? The answer. Things like that. And then." | |
| Aamod Varma,01:35:55,it's basically just yes or no answer. And then we based on that? We call the area safe here or not? | |
| Aamod Varma,01:36:01,"Yeah. So that's basically how we figured like, how we try to do the question bit for both. The actual hotel" | |
| Aamod Varma,01:36:09,"thing and as well as a chatboard thing." | |
| Thierry Damiba,01:36:14,"Thank you." | |
| Hamza Farooq,01:36:18,"Awesome. Martha. You had a question." | |
| Maaz Amjad,01:36:22,"Yeah. Great job. I think. Very nice presentation. So the question" | |
| Maaz Amjad,01:36:28,"so the last point you are mentioning how did you make sure that the question that was asked does not have an answer in your record databases and go to Eris Api. Did you like? Of course, you mentioned about a cosign similarity and duplicate what other approaches did you" | |
| Maaz Amjad,01:36:47,"think, applying specifically and understanding whether that question has and response invited databases? If not, then go to areas Api, and then do the rest of work" | |
| Aamod Varma,01:36:57,"right? So" | |
| Aamod Varma,01:36:58,"I'll just sort of distinction first. So the cosign distance, including distance, that distinction was in actually searching the hotels. So our program has 2 different aspects to it or or thingy. So one is the actual searching of the hotels. That's what we're doing here. It's gonna give 3 hotels. Now, this is where the we use a neural search where it uses cosign similarity to actually find the closest" | |
| Aamod Varma,01:37:22,"hotel. But this one it's it's a bit distinct in the sense that how we figured out the answer was, there is basically, we gave the entire description that we had as well as the actual question to, and basically just Austin, to make it super simple? We just asked. It is the answer to this question, there, here." | |
| Aamod Varma,01:37:42,"now, obviously, like we can't, really. You can't really, to us just ask like that simple question we had to like, convince, tell it that make. There's a specific answer to this question and replying, a yes or no, and that sense. So basically, we just use the" | |
| Aamod Varma,01:37:57,"A Gpd for again to as a layer to basically figure out if it's answerable or not." | |
| Maaz Amjad,01:38:05,"Just to understand. So you provided the your vector database and the question and asked our Gp, whether your question has an answer in that? If not, then go to Aris Api." | |
| Aamod Varma,01:38:17,"So yeah. So we provided the actual text version to descriptions as well as the reviews as well as their the query, the question, and then it would respond in 2 ways, yes or no. If it's yes. So the answer is there we continue with our normal thing. So that's basically answering the question. If it's no, we use an we use the errors. Api, we extract the information" | |
| Aamod Varma,01:38:39,"we appended back to the description or the initial text that we input it into Gpd, and then. So now the what Gpd has access to is the extra information by Api as well as a description and the reviews, and it would also based on that" | |
| Aamod Varma,01:38:55,"if anybody like wants to see the like the code, so it will make it easier just for you asking as well." | |
| Maaz Amjad,01:39:02,"Yeah, it would be great if you can share the Github link. It would be nice just to see the code. Thank you so much." | |
| Aamod Varma,01:39:07,"I'll put it in the chat. Thank you" | |
| Aamod Varma,01:39:12,"any other questions." | |
| Hamza Farooq,01:39:17,"tell us more about your experience with errors. Api, some things that work something that did not work." | |
| Aamod Varma,01:39:23,"Yeah, sure. So in terms of the Aris API, I guess," | |
| Aamod Varma,01:39:27,"with everything related to GPT stuff and the Aris API, an important thing is actually making the prompt, the question. And so that was one of the initial issues that we had." | |
| Aamod Varma,01:39:40,"For instance, when I ask a question like, how's the zoom facility in the hotel? I can't really directly put that into the API. So what I had to do is figure out the actual hotel's location, append it all together, and then construct a whole question" | |
| Aamod Varma,01:39:55,"such that it's able to give me a good answer. So once we figured out the prompt itself, it was actually a good experience, because every time we asked a question, I would get an answer in a couple of seconds. I could easily just append that into" | |
| Aamod Varma,01:40:13,"whatever I wanted to, and then I was able to generate these answers about the hotels, etc. I could actually show you some of the previous responses, so if I go back up here:" | |
| Aamod Varma,01:40:26,"I asked a question about the restaurants, and it gave me" | |
| Aamod Varma,01:40:31,"a list of restaurants. And these kinds of outputs are super easy to append and use, because it's a list that has exactly" | |
| Aamod Varma,01:40:42,"the restaurants, where each is located, and a bit about the restaurant itself. This is super good and easy for us to work with, because I could directly use it with any of my other applications, so in that sense it was actually pretty good. And then, I guess, overall, everybody knows that when you're talking to GPT itself, it doesn't really have access to real-time information, and this was a really good answer to that problem. Before we actually started, we didn't really know," | |
| Aamod Varma,01:41:12,"we didn't really know how we could actually get answers about the restaurants or the tourist attractions nearby. And I guess we worked through this, figured out how to integrate it, and it's working pretty well so far." | |
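The query construction step Aamod mentions, resolving the hotel's location and folding it into the user's question before sending it to the external API, could be as simple as the sketch below. The function name and output format are hypothetical, not the team's implementation.

```python
def build_api_query(question, hotel_name, hotel_location):
    # A raw user question like "How's the pool?" is too ambiguous for an
    # external search API, so anchor it to the specific hotel and location.
    return f"{question.rstrip('?')} at {hotel_name} in {hotel_location}?"
```

For example, `build_api_query("How's the pool?", "Grand Hotel", "Lisbon")` yields `"How's the pool at Grand Hotel in Lisbon?"`.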
| Hamza Farooq,01:41:32,"Awesome. I think that is all good stuff to hear, you know." | |
| Hamza Farooq,01:41:36,"Great work on that, and thank you for, you know, taking us through what you've done, how you built it, and all the things that you built on it. Okay." | |
| Hamza Farooq,01:41:47,"Thierry, did you have a question again?" | |
| Thierry Damiba,01:41:55,"Yeah, another question I had for both of them was: if you had a month or a year to continue working on this project with these same tools, is there any functionality you would add?" | |
| Aamod Varma,01:42:01,"Yeah, definitely. So, for instance, one of the things that we wanted to do is have the ability to actually get, for instance, images of the" | |
| Aamod Varma,01:42:11,"hotel. So when we're talking with GPT itself, we have a nice chatbot conversation. We could ask questions about the hotel images, we could ask it for links, and integrate all that together, because that's super interesting. And another thing, and this is a bit of a reach, an interesting thing we could do is ask questions about specific rooms. Now, because the images of these rooms are there on the Internet," | |
| Aamod Varma,01:42:39,"we could use these images and extract information from them, from the rooms, and it would tell me how many beds are there," | |
| Aamod Varma,01:42:50,"is there a TV, and things like that. So it would actually give me information about a specific room from the images, basically from the Internet. Now, that's super cool, because" | |
| Aamod Varma,01:42:59,"we could use so many images from the Internet, get information from there, and put it all together. And it's a super good interface where the user can get basically any information about the hotel in a super easy and quick way." | |
| Aamod Varma,01:43:14,"So that's one of the things that we were planning on doing, but obviously, in this short span of time, we couldn't really implement all that." | |
| Aamod Varma,01:43:22,"Yeah." | |
| Ze Yu Jiang,01:43:25,"But yeah, the ultimate goal was definitely to aim for more of a multimodal experience for users, because ultimately the main issue is that there are many suitable and amazing hotels out there. It's just that, due to the lack of information and being ill-informed about the different options out there, people forgo" | |
| Ze Yu Jiang,01:43:46,"booking a certain hotel, maybe," | |
| Ze Yu Jiang,01:43:48,"or they might end up choosing one that is not necessarily the best fit for them or their needs. So, by incorporating all these different experiences in the near future," | |
| Ze Yu Jiang,01:43:58,"you'd know exactly what type of room you'll be seeing and what kind of services the hotel will" | |
| Ze Yu Jiang,01:44:04,"really offer. It's kind of a win-win for both the customer as well as the hotel, keeping its high reputation and not being," | |
| Ze Yu Jiang,01:44:13,"I guess, tarnished by a review due to guests not knowing that there are actually better options available. So yeah." | |
| Ze Yu Jiang,01:44:26,"any other questions." | |
| Shreeharsha G N,01:44:29,"I think there's one more. Yeah." | |
| Shreeharsha G N,01:44:31,"Can you hear me? Great work, I would say. So, was there any opportunity to try" | |
| Shreeharsha G N,01:44:42,"real-time, I mean, LLMs like Copilot or" | |
| Shreeharsha G N,01:44:51,"Bard, actually, with this data?" | |
| Aamod Varma,01:44:55,"We didn't really explore any of that, so I'm not really sure how effective that would be." | |
| Aamod Varma,01:45:00,"I mean, initially we didn't really plan on doing the chatbot either, but we worked through that short period of time and managed to figure it out. But yeah, I'm not sure about the actual implementation with our data set, and how real-time information would work in that sense." | |
| Ze Yu Jiang,01:45:18,"Okay, great. Okay, thank you. Yeah, because the OpenAI API was really the first thing that popped to mind when it came to" | |
| Ze Yu Jiang,01:45:26,"integrating or creating our own version of a chatbot." | |
| Ze Yu Jiang,01:45:31,"Yeah, we haven't really had enough time to explore the other LLMs, but definitely that could be something we would look into in the future." | |
| Shreeharsha G N,01:45:40,"Yeah, thanks, thanks a lot." | |
| Hamza Farooq,01:45:42,"Alright, folks, this is amazing. Thank you so much." | |
| Hamza Farooq,01:45:47,"As this is one of our last classes, you know, as we are getting together, what I would love to do is, if every one of you can put your camera on, I'm going to take a picture of all of us," | |
| Hamza Farooq,01:45:56,"and that is gonna be a memory. If you don't want me to share it on LinkedIn, I will not. I just wanna have a memory of you all. So it's okay if you're having a bad hair day; as you can see, I'm having a very bad one myself, but" | |
| Hamza Farooq,01:46:13,"let's try to get one." | |
| Hamza Farooq,01:46:16,"you know. Show us the beach also." | |
| Hamza Farooq,01:46:21,"Alright, we have Dashi left. Dashi, can you go ahead and share your video?" | |
| Dino,01:46:29,"Alright folks." | |
| Hamza Farooq,01:46:37,"Alright. Okay. So one from here, hold on. Sorry, my mistake." | |
| Hamza Farooq,01:46:46,"So everybody smile at 3, 1, 2, 3," | |
| Hamza Farooq,01:46:53,"Awesome. Thank you, everyone. This has been great. Big shout-out to Aamod for, you know, coming in and doing this project. We'll be connecting with you again, you know, just to talk more and to help and support what you've done over here." | |
| Hamza Farooq,01:47:11,"Leave your LinkedIn details, so if anybody from our session wants to connect with you and learn more, they can do that. I'm just so surprised that in such a short time you were able to build such a great product for us, you know, one that sort of works and does such a great job." | |
| Ze Yu Jiang,01:47:29,"Yeah, thank you, Hamza, for providing us the opportunity to do this challenge." | |
| Hamza Farooq,01:47:35,"Absolutely. He just found me somehow, and then he reached out and asked, would you be interested? And I was like, hell yeah." | |
| Hamza Farooq,01:47:46,"So that sort of all worked out." | |
| Hamza Farooq,01:47:49,"We are at the end of the course cohort. We'll have a demo day, but that's a different conversation. I just wanna say thank you to everyone who paid and, you know, took the time to come to this course." | |
| Hamza Farooq,01:48:02,"You must have received a survey about this course, you know, so that you can give your feedback; that will help us improve the course. We have another course that is starting tomorrow, on advanced LLMs." | |
| Hamza Farooq,01:48:16,"Some of you have joined it; some of you know about the course. We don't have scholarships for that course yet. It's the first time that I'm teaching it. But" | |
| Hamza Farooq,01:48:26,"if you want to join, just send me a message on Slack and we can converse. But some of you are already in that course, you know, those who signed up earlier." | |
| Hamza Farooq,01:48:36,"And with that, thank you, everyone. Thank you so much, once again. I'll try to always be on Slack. I can't say always, but we'll be on Slack, so reach out with any concerns. If you're not connected on LinkedIn, please send me a message on LinkedIn." | |
| Hamza Farooq,01:48:52,"I love to connect with you all." | |
| Hamza Farooq,01:48:54,"And last but not least, please make sure to leave a comment" | |
| Hamza Farooq,01:48:59,"or, you know, anything. If you don't like it, don't do anything; if you really liked it, please do, because your comment will be visible too, and it just helps me grow. This is one way for me to make a livelihood; I quit Google, and I do this. This was my sixth cohort for this particular course;" | |
| Hamza Farooq,01:49:21,"in total, I've done nine cohorts since May, so I've met a lot of interesting people." | |
| Hamza Farooq,01:49:27,"Thank you, everyone, and good day, good weekend, and thank you so much. Thank you." | |
| Hamza Farooq,01:49:44,"Thank you, everyone." | |
| Ivan de Souza,01:49:45,"Thank you. Bye, bye, bye." | |