Columns: episode, text, timestamp_link
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But if I ask you what a teacher does, nobody knows.
https://karpathy.ai/lexicap/0005-large.html#00:11:54.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And that is intelligence.
https://karpathy.ai/lexicap/0005-large.html#00:11:59.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But we know from history, and now from math and machine learning, that a teacher can do a lot.
https://karpathy.ai/lexicap/0005-large.html#00:12:01.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, from a mathematical point of view, what is the great teacher?
https://karpathy.ai/lexicap/0005-large.html#00:12:12.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I don't know.
https://karpathy.ai/lexicap/0005-large.html#00:12:16.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
That's an open question.
https://karpathy.ai/lexicap/0005-large.html#00:12:17.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
No, but we can say what a teacher can do.
https://karpathy.ai/lexicap/0005-large.html#00:12:18.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
He can introduce some invariants, some predicates for creating invariants.
https://karpathy.ai/lexicap/0005-large.html#00:12:25.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
How does he do it?
https://karpathy.ai/lexicap/0005-large.html#00:12:32.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I don't know, because the teacher knows reality and can describe, from this reality, predicates and invariants.
https://karpathy.ai/lexicap/0005-large.html#00:12:33.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But he knows that when you use invariants, you can decrease the number of observations a hundred times.
https://karpathy.ai/lexicap/0005-large.html#00:12:41.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, but maybe try to pull that apart a little bit.
https://karpathy.ai/lexicap/0005-large.html#00:12:49.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I think you mentioned, like, a piano teacher saying to the student, "play like a butterfly."
https://karpathy.ai/lexicap/0005-large.html#00:12:53.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yeah.
https://karpathy.ai/lexicap/0005-large.html#00:12:59.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I play piano.
https://karpathy.ai/lexicap/0005-large.html#00:13:00.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I've played guitar for a long time.
https://karpathy.ai/lexicap/0005-large.html#00:13:01.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yeah, maybe it's romantic, poetic, but it feels like there's a lot of truth in that statement.
https://karpathy.ai/lexicap/0005-large.html#00:13:03.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Like there is a lot of instruction in that statement.
https://karpathy.ai/lexicap/0005-large.html#00:13:12.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And so, can you pull that apart?
https://karpathy.ai/lexicap/0005-large.html#00:13:15.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
What is that?
https://karpathy.ai/lexicap/0005-large.html#00:13:17.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
The language itself may not contain this information.
https://karpathy.ai/lexicap/0005-large.html#00:13:19.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It is not blah, blah, blah.
https://karpathy.ai/lexicap/0005-large.html#00:13:22.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It is not blah, blah, blah.
https://karpathy.ai/lexicap/0005-large.html#00:13:24.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It affects you.
https://karpathy.ai/lexicap/0005-large.html#00:13:25.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It's what?
https://karpathy.ai/lexicap/0005-large.html#00:13:26.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It affects you.
https://karpathy.ai/lexicap/0005-large.html#00:13:27.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It affects your playing.
https://karpathy.ai/lexicap/0005-large.html#00:13:28.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yes, it does, but it's not the laying.
https://karpathy.ai/lexicap/0005-large.html#00:13:29.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It feels like what is the information being exchanged there?
https://karpathy.ai/lexicap/0005-large.html#00:13:34.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
What is the nature of information?
https://karpathy.ai/lexicap/0005-large.html#00:13:38.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
What is the representation of that information?
https://karpathy.ai/lexicap/0005-large.html#00:13:39.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I believe that it is a sort of predicate, but I don't know.
https://karpathy.ai/lexicap/0005-large.html#00:13:41.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
That is exactly what intelligence and machine learning should be.
https://karpathy.ai/lexicap/0005-large.html#00:13:45.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yes.
https://karpathy.ai/lexicap/0005-large.html#00:13:49.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Because the rest is just mathematical technique.
https://karpathy.ai/lexicap/0005-large.html#00:13:50.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
I think that what was discovered recently is that there are two mechanisms of learning.
https://karpathy.ai/lexicap/0005-large.html#00:13:53.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
One is called the strong convergence mechanism, and the other the weak convergence mechanism.
https://karpathy.ai/lexicap/0005-large.html#00:14:03.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Before, people used only one convergence.
https://karpathy.ai/lexicap/0005-large.html#00:14:08.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
In the weak convergence mechanism, you can use predicates.
https://karpathy.ai/lexicap/0005-large.html#00:14:11.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
That's what "play like a butterfly" is, and it will immediately affect your playing.
https://karpathy.ai/lexicap/0005-large.html#00:14:16.120
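To make the two mechanisms concrete, here is how the distinction is usually written down in Vapnik's later work on learning with statistical invariants (a sketch for orientation; the notation is not quoted from the conversation). Strong convergence asks the estimate $f_\ell$ to approach the desired function $f$ in norm:

\[
\int \big(f_\ell(x) - f(x)\big)^2 \, dP(x) \;\longrightarrow\; 0 .
\]

Weak convergence only asks that they agree against every test function (predicate) $\psi$:

\[
\int \big(f_\ell(x) - f(x)\big)\,\psi(x)\, dP(x) \;\longrightarrow\; 0 \quad \text{for all } \psi .
\]

On a training set $(x_1, y_1), \dots, (x_\ell, y_\ell)$, a predicate is enforced as an empirical invariant,

\[
\frac{1}{\ell}\sum_{i=1}^{\ell} \psi(x_i)\, f(x_i) \;\approx\; \frac{1}{\ell}\sum_{i=1}^{\ell} \psi(x_i)\, y_i ,
\]

so a statement like "play like a butterfly" or "swims like a duck" constrains the admissible functions without adding labeled examples.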
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You know, there is a great English proverb.
https://karpathy.ai/lexicap/0005-large.html#00:14:23.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
If it looks like a duck, swims like a duck, and quacks like a duck, then it is probably a duck.
https://karpathy.ai/lexicap/0005-large.html#00:14:27.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yes.
https://karpathy.ai/lexicap/0005-large.html#00:14:35.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But this is exactly about predicates.
https://karpathy.ai/lexicap/0005-large.html#00:14:36.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Looks like a duck: what does it mean?
https://karpathy.ai/lexicap/0005-large.html#00:14:40.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You saw many ducks; that is your training data.
https://karpathy.ai/lexicap/0005-large.html#00:14:43.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, you have a description of how, in general, ducks look.
https://karpathy.ai/lexicap/0005-large.html#00:14:47.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yeah.
https://karpathy.ai/lexicap/0005-large.html#00:14:56.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
The visual characteristics of a duck.
https://karpathy.ai/lexicap/0005-large.html#00:14:57.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yeah.
https://karpathy.ai/lexicap/0005-large.html#00:14:59.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But you want, and you have, a model for recognition.
https://karpathy.ai/lexicap/0005-large.html#00:15:00.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, you would like the theoretical description from the model to coincide with the empirical description,
https://karpathy.ai/lexicap/0005-large.html#00:15:04.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
which you saw in reality.
https://karpathy.ai/lexicap/0005-large.html#00:15:12.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, "looks like a duck", that is general.
https://karpathy.ai/lexicap/0005-large.html#00:15:14.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But what about swims like a duck?
https://karpathy.ai/lexicap/0005-large.html#00:15:18.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You should know that ducks swim.
https://karpathy.ai/lexicap/0005-large.html#00:15:21.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You could say "plays chess like a duck."
https://karpathy.ai/lexicap/0005-large.html#00:15:23.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Okay.
https://karpathy.ai/lexicap/0005-large.html#00:15:26.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
A duck doesn't play chess.
https://karpathy.ai/lexicap/0005-large.html#00:15:27.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And it is a completely legal predicate, but it is useless.
https://karpathy.ai/lexicap/0005-large.html#00:15:29.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, the teacher can recognize predicates that are not useless.
https://karpathy.ai/lexicap/0005-large.html#00:15:35.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, up to now, we don't use these predicates in existing machine learning.
https://karpathy.ai/lexicap/0005-large.html#00:15:41.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
That is why we need zillions of data.
https://karpathy.ai/lexicap/0005-large.html#00:15:47.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But in this English proverb, they use only three predicates.
https://karpathy.ai/lexicap/0005-large.html#00:15:50.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Looks like a duck, swims like a duck, and quacks like a duck.
https://karpathy.ai/lexicap/0005-large.html#00:15:55.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, you can't deny the fact that "swims like a duck" and "quacks like a duck" have humor in them, have ambiguity.
https://karpathy.ai/lexicap/0005-large.html#00:15:59.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Let's talk about "swims like a duck."
https://karpathy.ai/lexicap/0005-large.html#00:16:08.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It doesn't say "jumps like a duck."
https://karpathy.ai/lexicap/0005-large.html#00:16:12.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Why?
https://karpathy.ai/lexicap/0005-large.html#00:16:16.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Because...
https://karpathy.ai/lexicap/0005-large.html#00:16:17.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It's not relevant.
https://karpathy.ai/lexicap/0005-large.html#00:16:18.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
But that means that you know ducks, you know different birds, you know animals.
https://karpathy.ai/lexicap/0005-large.html#00:16:20.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And you derive from this that it is relevant to say "swims like a duck."
https://karpathy.ai/lexicap/0005-large.html#00:16:27.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, underneath, in order for us to understand "swims like a duck," it feels like we need to know millions of other little pieces of information.
https://karpathy.ai/lexicap/0005-large.html#00:16:32.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Which we pick up along the way.
https://karpathy.ai/lexicap/0005-large.html#00:16:42.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You don't think so?
https://karpathy.ai/lexicap/0005-large.html#00:16:44.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
There doesn't need to be this knowledge base; those statements themselves carry some rich information that helps us understand the essence of a duck.
https://karpathy.ai/lexicap/0005-large.html#00:16:45.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Yeah.
https://karpathy.ai/lexicap/0005-large.html#00:16:55.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
How far are we from integrating predicates?
https://karpathy.ai/lexicap/0005-large.html#00:16:57.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You know, when you consider the complete theory of machine learning...
https://karpathy.ai/lexicap/0005-large.html#00:17:01.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, what it does is this: you have a lot of functions.
https://karpathy.ai/lexicap/0005-large.html#00:17:07.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And then you say: it looks like a duck.
https://karpathy.ai/lexicap/0005-large.html#00:17:12.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
You see your training data.
https://karpathy.ai/lexicap/0005-large.html#00:17:17.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
From the training data, you recognize how the expected duck should look.
https://karpathy.ai/lexicap/0005-large.html#00:17:20.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Then you remove all the functions that do not match how, from the training data, you think it should look.
https://karpathy.ai/lexicap/0005-large.html#00:17:31.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, you decrease the number of functions from which you pick up one.
https://karpathy.ai/lexicap/0005-large.html#00:17:40.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Then you give a second predicate and again decrease the set of functions.
https://karpathy.ai/lexicap/0005-large.html#00:17:46.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And after that you pick up the best function you can find.
https://karpathy.ai/lexicap/0005-large.html#00:17:52.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
It is standard machine learning.
https://karpathy.ai/lexicap/0005-large.html#00:17:56.120
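As a toy illustration of the recipe just described (a sketch only; the data, predicates, and thresholds below are invented for the example and are not from the conversation): start with a large pool of candidate functions, discard the ones that violate each predicate on the training data, then pick the best survivor by empirical error.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training data: 2-D features, binary labels ("duck" vs "not duck").
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    def predict(w, b, X):
        """A candidate function: a linear threshold classifier."""
        return (X @ w + b > 0).astype(float)

    # A large pool of candidate functions (random linear classifiers).
    pool = [(rng.normal(size=2), rng.normal()) for _ in range(2000)]

    # Predicates: statistics an admissible function must reproduce on the data.
    def rate_invariant(w, b):
        # Predicted positive rate should match the observed positive rate.
        return abs(predict(w, b, X).mean() - y.mean()) < 0.1

    def correlation_invariant(w, b):
        # Predictions should correlate with the first feature as the labels do.
        return abs(np.mean(predict(w, b, X) * X[:, 0]) - np.mean(y * X[:, 0])) < 0.1

    # Each predicate shrinks the admissible set of functions.
    admissible = [f for f in pool if rate_invariant(*f)]
    admissible = [f for f in admissible if correlation_invariant(*f)]
    assert admissible, "no candidate survived; loosen thresholds or enlarge the pool"

    # Finally, pick the best remaining function by empirical error.
    best = min(admissible, key=lambda f: np.mean(predict(*f, X) != y))
    print(f"{len(pool)} -> {len(admissible)} candidates, "
          f"train error {np.mean(predict(*best, X) != y):.2f}")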
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, why do you not need too many examples?
https://karpathy.ai/lexicap/0005-large.html#00:17:58.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Because your predicates aren't very good?
https://karpathy.ai/lexicap/0005-large.html#00:18:03.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
That means that the predicates are very good, because every predicate is invented to decrease the admissible set of functions.
https://karpathy.ai/lexicap/0005-large.html#00:18:06.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, you talk about admissible set of functions and you talk about good functions.
https://karpathy.ai/lexicap/0005-large.html#00:18:17.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, what makes a good function?
https://karpathy.ai/lexicap/0005-large.html#00:18:22.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, the admissible set of functions is a set of functions which has small capacity, or small diversity: small VC dimension, for example.
https://karpathy.ai/lexicap/0005-large.html#00:18:24.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
And which contains a good function inside.
https://karpathy.ai/lexicap/0005-large.html#00:18:35.120
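Since VC dimension is named here, a small self-contained check of what it means (again only an illustrative sketch, assuming NumPy and SciPy are available; it is not part of the conversation): linear classifiers in the plane can realize every labeling of some set of 3 points, i.e. they "shatter" it, but they cannot shatter the 4 corners of a square (the XOR labeling fails), which is consistent with their VC dimension being 3.

    from itertools import product

    import numpy as np
    from scipy.optimize import linprog

    def separable(points, labels):
        """True if some w, b satisfy y_i * (w . x_i + b) >= 1 for all i,
        checked as a linear-programming feasibility problem."""
        pts = np.asarray(points, dtype=float)
        ys = np.asarray(labels, dtype=float)
        # Variables are (w1, w2, b); constraints: -y_i * (w . x_i + b) <= -1.
        A_ub = -ys[:, None] * np.hstack([pts, np.ones((len(pts), 1))])
        b_ub = -np.ones(len(pts))
        res = linprog(c=[0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * 3, method="highs")
        return res.success

    def shattered(points):
        """True if every +1/-1 labeling of the points is linearly separable."""
        return all(separable(points, labels)
                   for labels in product([-1.0, 1.0], repeat=len(points)))

    three = [(0, 0), (1, 0), (0, 1)]         # a triangle: all 8 labelings separable
    four = [(0, 0), (1, 0), (0, 1), (1, 1)]  # a square: the XOR labeling is not
    print("3 points shattered:", shattered(three))  # True
    print("4 points shattered:", shattered(four))   # False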
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, by the way, for people who don't know VC, you're the V in VC.
https://karpathy.ai/lexicap/0005-large.html#00:18:37.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, how would you describe to a layperson what VC theory is?
https://karpathy.ai/lexicap/0005-large.html#00:18:45.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
How would you describe VC?
https://karpathy.ai/lexicap/0005-large.html#00:18:50.120
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
So, when you have a machine.
https://karpathy.ai/lexicap/0005-large.html#00:18:52.120