Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Transcript: https://karpathy.ai/lexicap/0005-large.html

[00:35:01] Vladimir Vapnik: You see, you can prove some theorems.
[00:35:05] Vladimir Vapnik: But we also created a theory for the case when you know the probability measure. And that is the best case which can happen; it is entropy theory. So from a mathematical point of view, you know the best possible case and the worst possible case. You can derive different models in the middle, but it's not so interesting.
[00:35:30] Lex Fridman: You think the edges are interesting?
[00:35:33] Vladimir Vapnik: The edges are interesting because it is not so easy to get a good bound, an exact bound. There are not many cases where you have an exact bound. But there are interesting principles which the math reveals.
[00:35:54] Lex Fridman: Do you think it's interesting because it's challenging and reveals interesting principles that allow you to get those bounds? Or do you think it's interesting because it's actually very useful for understanding the essence of a function, of an algorithm? It's like me judging your life as a human being by the worst thing you did and the best thing you did, versus all the stuff in the middle. It seems not productive.
[00:36:24] Vladimir Vapnik: I don't think so, because you cannot describe the situation in the middle, so it will not be general. You can describe the edge cases, and it is clear they have some model, but you cannot describe a model for every new case. So you will never be accurate when you're using a model.
[00:36:53] Lex Fridman: But from a statistical point of view, the way you've studied functions and the nature of learning in the world, don't you think that the real world has a very long tail? That the edge cases are very far away from the mean, the stuff in the middle? Or no?
[00:37:19] Vladimir Vapnik: I don't know that. I think that, from my point of view, if you use formal statistics, you need the uniform law of large numbers. If you use this invariance business, you need just the law of large numbers. And there is a huge difference between the uniform law of large numbers and the law of large numbers.
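[Editor's note: not part of the conversation. The distinction Vapnik draws can be made concrete with a small simulation; the sketch below (names and the example function class are my own) shows the plain law of large numbers holding for one fixed function, while uniform convergence fails over a class that is too rich — the classic indicator-of-finite-sets example.]

```python
import random

rng = random.Random(0)

def gaps(n):
    """Return (single-function gap, worst-case gap over a rich class)."""
    xs = [rng.random() for _ in range(n)]  # i.i.d. uniform on [0, 1]

    # Law of large numbers: for ONE fixed function f(x) = x,
    # the empirical mean converges to the true mean 1/2.
    single_gap = abs(sum(xs) / n - 0.5)

    # Uniform law of large numbers fails for a class that is too rich:
    # over all indicator functions of finite sets, the indicator of
    # exactly {x_1, ..., x_n} has empirical mean 1 but true mean 0
    # (a finite set has probability zero under a continuous distribution),
    # so the worst-case gap over the class is 1 no matter how large n is.
    worst_gap = abs(1.0 - 0.0)

    return single_gap, worst_gap

single, worst = gaps(10_000)
print(f"single-function gap: {single:.4f}")        # small, shrinks with n
print(f"worst-case gap over the rich class: {worst}")  # stays 1.0
```

The first gap shrinks on the order of 1/sqrt(n); the second never shrinks, which is why bounding the worst case over a whole function class — the subject of VC theory — is a different and harder problem than bounding one average.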
[00:37:56] Lex Fridman: Is it useful to describe that a little more, or should we just take it to...
[00:38:01] Vladimir Vapnik: For example, when I was talking about the duck, I gave three predicates, and that was enough. But if you try to make the distinction formally, you will need a lot of observations. So that means the information "looks like a duck" contains a lot of bits of information, of formal bits of information. We don't know how many bits of information come from the "artificial" and how many from the "intelligence". And that is the subject of analysis.
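[Editor's note: a toy calculation, mine rather than Vapnik's, of why a few good predicates can stand in for many observations. Each perfectly informative yes/no predicate contributes one bit, while a formal statistical distinction needs a sample size that grows with the capacity of the hypothesis class; the sample-complexity formula below is a standard textbook shape with illustrative constants, not an exact bound.]

```python
import math

def predicates_needed(num_categories):
    """Each perfectly informative yes/no predicate contributes one bit,
    halving the set of remaining candidate categories."""
    return math.ceil(math.log2(num_categories))

def samples_needed(vc_dim, eps=0.1, delta=0.05):
    """Order-of-magnitude VC-style sample count for learning to accuracy
    eps with confidence 1 - delta: n = O((d + log(1/delta)) / eps^2).
    Constants vary by textbook; this is a sketch, not a tight bound."""
    return math.ceil((vc_dim + math.log(1 / delta)) / eps**2)

print(predicates_needed(8))   # 3 predicates suffice for 8 categories
print(samples_needed(10))     # on the order of a thousand observations
```

Three ideal predicates pin down one of eight categories, while the VC-style count asks for roughly a thousand observations even for a modest hypothesis class — the gap between a teacher's predicate and "formal bits of information".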
[00:38:42] Vladimir Vapnik: Till now, in all this business, I don't like how people consider artificial intelligence. They consider it as some code which imitates the activity of a human being. That is not science; it is applications. You would like to imitate? Go ahead, it is very useful and a good problem. But you need to learn something more: how people manage to develop, say, predicates like "seems like a duck" or "plays like a butterfly", or something like that. Not what the teacher tells you, but how it came into his mind, how he chose this image.
[00:39:37] Lex Fridman: So that process...
[00:39:38] Vladimir Vapnik: That is the problem of intelligence.
[00:39:39] Lex Fridman: That is the problem of intelligence, and you see that connected to the problem of learning?
[00:39:44] Vladimir Vapnik: Absolutely. Because you immediately give this predicate, a specific predicate like "seems like a duck" or "quacks like a duck". It was chosen somehow.
[00:39:57] Lex Fridman: So what is the line of work, would you say, if you were to formulate it as a set of open problems, that will take us there, to "play like a butterfly", that will get a system to be able to...
[00:40:12] Vladimir Vapnik: Let's separate two stories. One is a mathematical story: if you have a predicate, you can do something. And the other story is how to get the predicate. That is the intelligence problem, and people have not even started to understand intelligence. Because to understand intelligence, first of all, try to understand what teachers do. How does a teacher teach? Why is one teacher better than another?
[00:40:43] Lex Fridman: Yeah. And so you think we really haven't even started on the journey of generating the predicates?
[00:40:50] Vladimir Vapnik: No. We don't understand. We don't even understand that this problem exists.
[00:40:56] Vladimir Vapnik: Because did you hear...
[00:40:57] Lex Fridman: You do.
[00:40:58] Vladimir Vapnik: No, I just know the name.
[00:41:02] Vladimir Vapnik: I want to understand why one teacher is better than another, and how the teacher affects the student. It is not because he repeats the problems which are in the textbook. He makes some remarks. He makes some philosophy of reasoning.
[00:41:23] Lex Fridman: Yeah, that's a beautiful... So it is the formulation of the question that is the open problem: why is one teacher better than another?
[00:41:34] Vladimir Vapnik: Right. What does he do better?
[00:41:37] Lex Fridman: Yeah. What... Why, at every level? How do they get better? What does it mean to be better? The whole...
[00:41:49] Vladimir Vapnik: Yeah. From whatever model I have, one teacher can give a very good predicate. One teacher can say "swims like a dog" and another can say "jumps like a dog". And "jumps like a dog" carries zero information.
[00:42:09] Lex Fridman: So what is the most exciting problem in statistical learning you've ever worked on, or are working on now?
[00:42:17] Vladimir Vapnik: I just finished this invariants story, and I'm happy that... I believe that it is the ultimate learning story. At least I can show that there is no other mechanism, only two mechanisms. But they separate the statistical part from the intelligent part, and I know nothing about the intelligent part. And if we come to know this intelligent part, it will help us a lot in teaching, in learning.