Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
Vladimir Vapnik: So, the machine is capable of picking up one function from the admissible set of functions. But the set of admissible functions can be big. It could contain all continuous functions, and then it is useless: you don't have enough examples to pick out a function. But it can be small. Small in what we call capacity, though maybe it is better called diversity: the functions in the set are not very different from one another. It can be an infinite set of functions, but not very diverse, and that means it has small VC dimension. When the VC dimension is small, you need only a small amount of training data. So the goal is to create an admissible set of functions which has small VC dimension and contains a good function. Then you will be able to pick up the function using a small number of observations.
https://karpathy.ai/lexicap/0005-large.html#00:18:54.120
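The quantitative link between VC dimension and the amount of training data is a generalization bound. One standard form (the exact constants vary between formulations) states that with probability at least \(1 - \eta\),

\[
R(f) \;\le\; R_{\mathrm{emp}}(f) + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}},
\]

where \(h\) is the VC dimension of the admissible set, \(\ell\) is the number of training examples, \(R\) is the true risk, and \(R_{\mathrm{emp}}\) the empirical risk. The smaller \(h\) is relative to \(\ell\), the tighter the guarantee, which is why a small VC dimension lets you learn from few observations.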
Lex Fridman: So that is the task of learning?
https://karpathy.ai/lexicap/0005-large.html#00:20:02.120

Vladimir Vapnik: Yeah.
https://karpathy.ai/lexicap/0005-large.html#00:20:06.120

Lex Fridman: Creating a set of admissible functions that has a small VC dimension, and then you figure out a clever way of picking up a function from it?
https://karpathy.ai/lexicap/0005-large.html#00:20:07.120
Vladimir Vapnik: No, that is the goal of learning, which I formulated yesterday. Statistical learning theory does not concern itself with creating the admissible set of functions. In classical learning theory, everywhere, in 100% of textbooks, the admissible set of functions is given. But this is science about nothing, because the most difficult problem is to create the admissible set of functions: given, say, a continuum of functions, create an admissible set that has finite, small VC dimension and contains a good function. That was left out of consideration.
https://karpathy.ai/lexicap/0005-large.html#00:20:17.120
Lex Fridman: So what's the process of doing that? I mean, it's fascinating. What is the process of creating this admissible set of functions?
https://karpathy.ai/lexicap/0005-large.html#00:21:05.120
Vladimir Vapnik: That is invariants.
https://karpathy.ai/lexicap/0005-large.html#00:21:13.120

Lex Fridman: That's invariants.
https://karpathy.ai/lexicap/0005-large.html#00:21:15.120
Vladimir Vapnik: Yeah. You are looking at properties of the training data. A property means you have some function, and you count the average value of that function on the training data. You also have a model, and the expectation of this function on the model should coincide with that average. So the problem is how to pick up such functions. It can be any function; in fact, this is true for all functions. But when we are talking about, say, a duck: a duck does not jump, so you don't ask whether it jumps like a duck, because that is trivial. It does not jump, and that does not help you recognize a duck. But you know something, which questions to ask, and you ask whether it looks like a duck. "Looks like a duck" is the general situation. Looks like, say, a guy who has this illness, this disease: that is a legitimate question. So there is a general type of predicate, "looks like," and there are special types of predicates related to the specific problem. That is the intelligent part of this whole business, and that is where the teacher is involved.
https://karpathy.ai/lexicap/0005-large.html#00:21:16.120
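In equation form, such an invariant requires, for a chosen predicate function \(\psi\), that the model's expectation match the empirical average on the training data. A schematic rendering (roughly the constraint Vapnik later formalized in his work on learning using statistical invariants) is

\[
\frac{1}{\ell}\sum_{i=1}^{\ell} \psi(x_i)\, f(x_i) \;=\; \frac{1}{\ell}\sum_{i=1}^{\ell} \psi(x_i)\, y_i,
\]

where \(f\) is the model being selected and \((x_i, y_i)\) are the training examples. Each choice of predicate \(\psi\) cuts down the admissible set of functions without enumerating it.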
Lex Fridman: Incorporating the specialized predicates. What do you think about deep learning, about neural networks, these arbitrary architectures, as helping accomplish some of the tasks you're thinking about? Their effectiveness or lack thereof, what are the weaknesses and what are the possible strengths?
https://karpathy.ai/lexicap/0005-large.html#00:22:56.120
Vladimir Vapnik: You know, I think that this is fantasy, everything like deep learning, like features. Let me give you an example. One of the greatest books is Churchill's book about the history of the Second World War. He starts this book by describing how, in old times, when a war was over, the great kings would gather together, almost all of them relatives, and discuss what should be done, how to create peace. And they came to an agreement. And when the First World War happened, the general public came to power. And they were so greedy that they robbed Germany. And it was clear to everybody that it was not peace, that the peace would last only twenty years, because they were not professionals. And I see the same in machine learning. There are mathematicians, who look at the problem from a very deep, mathematical point of view. And there are computer scientists, who mostly do not know mathematics. They just have an interpretation, and they have invented a lot of blah-blah-blah interpretations, like deep learning. Why do you need deep learning? Mathematics does not know deep learning. Mathematics does not know neurons. It is just a function. If you want to say piecewise linear function, say that, and work in the class of piecewise linear functions. But they invent something, and then they try to prove its advantage through interpretations, which are mostly wrong. And when that is not enough, they appeal to the brain, about which they know nothing. Nobody knows what is going on in the brain. So I think it is more reliable to work on the math. This is a mathematical problem; do your best to solve this problem. Try to understand that there is not only one mode of convergence, the strong mode of convergence; there is also a weak mode of convergence, which requires a predicate. And if you go through all this stuff, you will see that you don't need deep learning. Even more, I would say one of the theorems, which is called the representer theorem, says that the optimal solution of the mathematical problem which describes learning is on a shallow network, not on deep learning.
https://karpathy.ai/lexicap/0005-large.html#00:23:20.120
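The result being invoked is the representer theorem from kernel methods. In one standard form: if \(f\) is chosen from a reproducing kernel Hilbert space \(\mathcal{H}\) with kernel \(K\) by minimizing a regularized empirical risk,

\[
f^{*} = \arg\min_{f \in \mathcal{H}} \; \sum_{i=1}^{\ell} L\bigl(y_i, f(x_i)\bigr) + \lambda \lVert f \rVert_{\mathcal{H}}^{2},
\]

then the minimizer admits the finite expansion

\[
f^{*}(x) = \sum_{i=1}^{\ell} \alpha_i\, K(x_i, x),
\]

that is, a single layer of kernel units centered on the training points, which is a shallow architecture.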
Lex Fridman: And a shallow network?
https://karpathy.ai/lexicap/0005-large.html#00:26:21.120

Vladimir Vapnik: Yeah. The ultimate problem is there.
https://karpathy.ai/lexicap/0005-large.html#00:26:22.120
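A minimal numerical sketch of that expansion, using kernel ridge regression with a Gaussian kernel (the data, kernel width, and regularization strength here are illustrative assumptions, not anything from the conversation). The closed-form optimum is exactly a weighted sum of kernels centered on the training points:

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances between rows of A and B -> Gaussian kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))                  # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)       # noisy targets

lam = 1e-2                                            # regularization strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # solves (K + lam*I) alpha = y

# The fitted predictor f(x) = sum_i alpha_i K(x_i, x): one "shallow" layer
# of kernel units on the training points, as the representer theorem states.
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(rbf_kernel(X_test, X) @ alpha)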
Lex Fridman: Absolutely. In the end, what you're saying is exactly right. The question is, you see no value in throwing something on the table and playing with it, without the math. Like a neural network, where, as you said, you throw something in the bucket; or the biological example, looking at the cells under the microscope; or looking at kings and queens. You don't see value in imagining the cells, or the kings and queens, and using that as inspiration and imagination for where the math will eventually lead you. You think that interpretation basically deceives you in a way that is not productive.
https://karpathy.ai/lexicap/0005-large.html#00:26:24.120
Vladimir Vapnik: I think that if you try to analyze this business of learning, and especially the discussion about deep learning, it is a discussion about interpretation, not about the things themselves, but about what you can say about things.
https://karpathy.ai/lexicap/0005-large.html#00:27:06.120