Hamza Farooq
00:00:29
Alright, can everybody see my screen?
Hamza Farooq
00:00:33
Yes. Sorry, okay, I didn't see it. Okay. So we are at the last part of our course, which is prompt engineering versus fine-tuning.
Hamza Farooq
00:00:46
There is some technical stuff that I will try to cover, but there's too much technical material here to cover in one session; there's a lot of math involved. What I'll do is give you an intuition about what fine-tuning is. And
Hamza Farooq
00:01:05
let's talk about
Hamza Farooq
00:01:08
where we have been so far, and what we have covered.
Hamza Farooq
00:01:13
Let me move the slides up and down. Okay, so, what we have done so far:
Hamza Farooq
00:01:19
we
Hamza Farooq
00:01:21
covered encoders.
Hamza Farooq
00:01:23
We covered decoder models. We had a guest speaker come in and speak about the transformer architecture and how it is built, and so on and so forth.
Hamza Farooq
00:01:32
We built retrieval systems from scratch. We looked at a retrieval system with BM25, and at retrieval systems in different forms.
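For reference, BM25 scoring like the system mentioned here can be sketched in a few lines of pure Python. This is an illustrative version with common default parameters (k1 = 1.5, b = 0.75), not the exact code from class:

```python
import math

# Illustrative BM25 scorer (not the class code); k1 and b are common defaults.
def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N           # average document length
    scores = []
    for doc in docs:
        score = 0.0
        for term in set(query_tokens):
            df = sum(1 for d in docs if term in d)  # document frequency
            if df == 0:
                continue
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)                    # term frequency in this doc
            denom = tf + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs chase cats in the park".split(),
    "retrieval systems rank documents".split(),
]
print(bm25_scores("cat mat".split(), docs))  # first doc scores highest
```

In practice you would use a tested library implementation, but the ranking idea (term frequency, inverse document frequency, and length normalization) is all in the loop above.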
Hamza Farooq
00:01:40
There were some API endpoints that we discussed at the end of each class, about using Gradio as an endpoint. You have access to all the code. The decoder models that we looked into were mostly for use in a RAG situation,
Hamza Farooq
00:01:59
where you could call a certain API endpoint of a product and use that. And then, I think, in the last class we spoke a lot about evaluation.
Hamza Farooq
00:02:11
Janine came in, and he spoke a lot about how to, you know, evaluate different forms of LLM applications.
Hamza Farooq
00:02:22
So one thing I did is bring over a module from our last course.
Hamza Farooq
00:02:29
I think you should be able to get access to that. I will make sure
Hamza Farooq
00:02:35
it's there.
Hamza Farooq
00:02:37
I think it is there already. So if you go into the cohort
Hamza Farooq
00:02:41
portal,
Hamza Farooq
00:02:44
you should be able to see a new module that I've put in, and this is called an overview of evaluation metrics.
Hamza Farooq
00:02:52
And if you go here you can actually see the slides that Janine presented.
Hamza Farooq
00:02:58
So you can sort of go through them yourselves. Take a look. I think he has done a great job of
Hamza Farooq
00:03:05
talking about it. So something I want to mention is that
Hamza Farooq
00:03:10
I would love for you all to think about the benchmarks. You don't have to implement them, but think about the benchmarks that you can include in the LLM applications, or any form of application, that you build for the capstone.
Hamza Farooq
00:03:22
It's not compulsory for you to do the capstone. I'm not going to push you all to do it, but at least what I would love for you to do is just
Hamza Farooq
00:03:32
push yourself enough so that it's a well-rounded reflection of your learning. Some of you have business constraints, such as not being able to share your business data, or something like that. That's not what I'm looking for. What I'm looking for is for you to have implemented the knowledge that you have learned in this class into something,
Hamza Farooq
00:03:51
and again, it sort of helps to, you know, talk and discuss it. I'll give you an example: one of my students, Bill, built a really good product that looks into Amazon reviews and recommends products.
Hamza Farooq
00:04:06
A day later, Amazon launched a product which looks exactly like that.
Hamza Farooq
00:04:11
Of course, Amazon did not copy him; it would take Amazon more than a day to copy someone's hard work. But the point is that he had the same initial sense, the same intuition of what to build, a day before a large company did it themselves. And Bill has taken it further: he is doing image search also. So I think I will have him come on demo day and present his project to you all.
Hamza Farooq
00:04:41
So what I would like for you all to do, when we talk about the capstone project, is this: if you intend to
Hamza Farooq
00:04:50
do a demo project, please submit one, and then accordingly, I'll try to either
Hamza Farooq
00:04:57
merge you with the next cohort, so you all can do a demo day together and you have time to build one.
Hamza Farooq
00:05:04
Right? So it won't be like you have two weeks or so; you might have five to six weeks to prepare for a demo day. One more thing I want to mention: for any future guest speakers that we have, you will just get an invite to them.
Hamza Farooq
00:05:19
So even though I had just two guest speakers this time, you'll get access to whoever comes in the future.
Hamza Farooq
00:05:27
Okay.
Hamza Farooq
00:05:28
So I believe this is all we have covered so far. In today's class, we want to talk about a few things: number one, prompt engineering; number two, fine-tuning LLMs; number three,
Hamza Farooq
00:05:44
PEFT; number four, validation metrics; number five, a code walkthrough of fine-tuning models. We'll talk about a local LLM. I've already shared that code with you all, the GPT version of how to fine-tune in ChatGPT, so you can sort of pick it up from there. But we have Susan joining in, who will be walking you all through how to fine-tune a Mistral model,
Hamza Farooq
00:06:14
and how to push all that, the model, onto Hugging Face. That would also give you the opportunity to say, hey, we built a model and we pushed it to Hugging Face. Okay.
Hamza Farooq
00:06:38
Okay, so what is prompt engineering?
Hamza Farooq
00:06:43
Who over here has not used prompt engineering yet,
Hamza Farooq
00:06:47
or has not been exposed to it in any form?
Hamza Farooq
00:06:51
And if you have, what have your best practices been for prompting? Anyone?
Thierry Damiba
00:07:02
definitely? Oh, go ahead.
Maaz Amjad
00:07:04
But go ahead, please.
Hamza Farooq
00:07:08
Thierry, go ahead.
Thierry Damiba
00:07:10
Oh, I was gonna say, it definitely depends on the use case, but most of my prompt engineering has been trying to get output that fits whatever my use case is.
Thierry Damiba
00:07:21
For example, I was working on a project where the chatbot would tell a story, and I had to do some messing around with the prompts because
Thierry Damiba
00:07:32
it didn't always want to tell a story. Sometimes it just wanted to answer a question, or sometimes it wanted to tell too long of a story. Things like that.
Hamza Farooq
00:07:41
Yep.
Hamza Farooq
00:07:44
yeah. So
Hamza Farooq
00:07:47
so this
Hamza Farooq
00:07:49
is problem number one when you do prompting.
Hamza Farooq
00:07:52
I'll explain what prompt engineering is, and then we'll go into the problems. Prompt engineering involves designing and refining language model prompts to achieve specific desired outputs. It includes crafting prompts that provide clear instructions, context, or constraints to guide the model's response.
Hamza Farooq
00:08:07
I think six out of ten times a prompt works;
Hamza Farooq
00:08:14
four out of ten times a prompt does not work.
Hamza Farooq
00:08:17
I don't know if you all have had that experience, but there are a lot of times that the prompt does not perform in the way that you would like it to perform. Or, you know, if you're using ChatGPT, you will see one result that says,
Hamza Farooq
00:08:31
"Based on the context given to me by you, here are some of the answers," and you don't want to see that. You will try very hard to get past that part, but
Hamza Farooq
00:08:44
there's no actual given way that will sort of say, "Oh, this is the best form to achieve that,"
Hamza Farooq
00:08:52
right? So I would say that
Hamza Farooq
00:08:55
prompt engineering exists.
Hamza Farooq
00:08:57
"This is a conversation between a customer and a polite, helpful customer service agent." Input indicator: the question from the customer. Output indicator: the response from customer service. And so on and so forth.
Hamza Farooq
00:09:09
You would run this through the language model, and it creates an output for the completion.
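The template just described can be sketched as a small function; the exact wording and the indicator labels here are illustrative assumptions, not a fixed standard:

```python
# Illustrative prompt template with instruction, context, input and output
# indicators. The labels and wording are assumptions, not a fixed standard.
def build_prompt(customer_question, context=""):
    return (
        "This is a conversation between a customer and a polite, "
        "helpful customer service agent.\n"
        f"Context: {context}\n"
        f"Customer: {customer_question}\n"  # input indicator
        "Agent:"                            # output indicator: model completes from here
    )

print(build_prompt("Where is my order?", context="Orders ship within 2 days."))
```

The resulting string is what you would send to the model; the model then continues the text after "Agent:".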
Hamza Farooq
00:09:14
That's what you essentially do when you just use ChatGPT: you say, "I want you to act like a Harvard professor, and I would like you to answer A, B, C, D."
Hamza Farooq
00:09:28
Or, "I want you to answer in such-and-such a style." That's all prompt engineering. What you're doing is just giving instructions.
Hamza Farooq
00:09:38
So some of the problems with it are that
Hamza Farooq
00:09:42
prompt engineering is basically an art, not a science,
Hamza Farooq
00:09:46
and the better you are at it in terms of English, the better you are able to do it. Prompting seems to be difficult for some machine learning researchers. This is not surprising, because prompt engineering is not machine learning; prompting is the opposite of machine learning.
Hamza Farooq
00:10:02
And what that means is, it's truly
Hamza Farooq
00:10:06
how good your English is, or whatever language you are prompting in.
Hamza Farooq
00:10:10
That's the essential part when you try to do prompt engineering.
Hamza Farooq
00:10:16
For a lot of folks, even some of you on this call, English is not your primary language, right?
Hamza Farooq
00:10:24
Let's have a show of hands: how many people do not have English as a primary language? One, two, three... I'd say there are at least ten people on the call. Right?
Hamza Farooq
00:10:36
The number one problem for people whose English is not their primary language: as good as your English is,
Hamza Farooq
00:10:46
Sometimes there are things that are not as fluent as you would like them to be.
Hamza Farooq
00:10:51
And here's the thing: prompt engineering was built on a corpus of the English language.
Hamza Farooq
00:10:59
So the English that was fed to that model, the English it learned from and that prompt engineering was built on, is the English that is spoken a lot more in the US
Hamza Farooq
00:11:16
than any other part of the world.
Hamza Farooq
00:11:18
So there are intrinsic biases in the way prompting has been shaped, due to the years of data that were used to train these models.
Hamza Farooq
00:11:30
It requires a specific version of English
Hamza Farooq
00:11:33
which is not consistent amongst anyone in this class. So
Hamza Farooq
00:11:38
if I ask you to make ChatGPT
Hamza Farooq
00:11:43
talk like a Harvard student or a Harvard professor, I can assure you that each one of us will have a different way of writing that prompt, because no two ways of writing are similar.
Hamza Farooq
00:11:54
This is the problem with prompting. Prompting is not something which is sustainable. It breaks
Hamza Farooq
00:12:01
and it all comes down to one reason: because LLMs
Hamza Farooq
00:12:08
are not deterministic. They are probabilistic.
Hamza Farooq
00:12:12
I'm gonna repeat it again.
Hamza Farooq
00:12:15
They are
Hamza Farooq
00:12:16
probabilistic, not deterministic.
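One common mechanism behind this probabilistic behavior is sampling at decode time, often controlled by a temperature parameter. Here is a minimal sketch; the logits and the three-token vocabulary are made up for illustration:

```python
import math
import random

# Sketch of temperature sampling over made-up logits for a 3-token vocabulary.
def sample_token(logits, temperature=1.0, rng=random):
    if temperature == 0:
        # Greedy decoding: deterministic argmax, same token every time.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted by max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the probabilities.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
print(sample_token(logits, temperature=0))  # always index 0 (greedy)
```

With temperature zero the choice is deterministic; with a positive temperature, repeated calls can return different tokens, which is one reason the same prompt can yield different completions.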
Hamza Farooq
00:12:20
Which means, when you run a traditional machine learning model,
Hamza Farooq
00:12:24
once you've trained the weights, you can reproduce the exact same output every time.
Hamza Farooq
00:12:32
I know there are some of you, Thierry, I want to say you do a lot of ML, right? When you use, let's say, XGBoost or LightGBM, or whatever you use, right?
Hamza Farooq
00:12:43
Once you have created the model, that model will always predict the exact same thing for that test input.
Hamza Farooq
00:12:51
Right? Is that correct? It will never give you a different answer,
Hamza Farooq
00:12:55
because the weights are fixed, or almost fixed. The thing with LLMs is, they are so nuanced and so complex
Hamza Farooq
00:13:05
that you can't.
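The classical-model side of this contrast can be sketched in a few lines; the weights here are arbitrary placeholders, not a real trained model:

```python
# A classical model with fixed, trained weights is deterministic: the same
# input always yields the same prediction. These weights are placeholders.
WEIGHTS = [0.8, -0.3, 0.5]
BIAS = 0.1

def predict(features):
    # Linear score followed by a fixed threshold: no randomness anywhere.
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 if z >= 0 else 0

x = [1.0, 2.0, 0.5]
print(predict(x) == predict(x))  # True: identical on every call
```

Once the weights are frozen, there is no source of variation left, which is exactly the reproducibility the XGBoost/LightGBM example describes.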
Hamza Farooq
00:13:07
And because you're using a community server. Always remember: when you use ChatGPT through the interface, unless it is a dedicated deployment, you're using a community version of that product,
Hamza Farooq
00:13:20
which means the weights are continuously changing. And it is almost impossible for you
Hamza Farooq
00:13:27
to recreate that exact output using the same prompt, maybe ten minutes apart.
Hamza Farooq
00:13:35
What that means is that this model is continuously changing its weights; it's continuously changing the way it creates an output.
Hamza Farooq
00:13:43
So what happens then?
Hamza Farooq
00:13:46
Pretty clearly, you will not have the ability to reproduce the same results that you want.