Tomaso Poggio: Brains, Minds, and Machines | Lex Fridman Podcast #13

https://karpathy.ai/lexicap/0013-large.html#00:42:04.520
...was you should have at least twice as many data points as parameters. Maybe 10 times is better. Now, the way you train neural networks these days is that they have 10 or 100 times more parameters than data, exactly the opposite. And it has been one of the puzzles about neural networks: how can you get something that really works when you have so much freedom?
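
As a back-of-the-envelope illustration of the regime described above, here is a minimal sketch (the layer sizes and training-set size are invented for the example) that counts the weights of a small fully connected network and compares them with the number of training examples:

    # Hypothetical example: parameter count of a small fully connected network
    # versus the size of its training set.
    layer_sizes = [784, 1024, 1024, 10]   # assumed MNIST-scale MLP shape
    n_train = 60_000                      # assumed number of training examples

    # Each dense layer has (inputs * outputs) weights plus `outputs` biases.
    n_params = sum(a * b + b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

    print(f"parameters: {n_params:,}")                       # ~1.9 million
    print(f"training examples: {n_train:,}")
    print(f"params per example: {n_params / n_train:.1f}")   # ~31x

With these assumed sizes the network carries roughly 30 parameters per training example, the reverse of the classical two-to-ten data points per parameter rule of thumb.
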
https://karpathy.ai/lexicap/0013-large.html#00:42:40.640
From that little data, it can generalize somehow.

https://karpathy.ai/lexicap/0013-large.html#00:42:43.000
Right, exactly.

https://karpathy.ai/lexicap/0013-large.html#00:42:44.200
Do you think the stochastic nature of it is essential, the randomness?
https://karpathy.ai/lexicap/0013-large.html#00:42:48.160
So I think we have some initial understanding of why this happens. But one nice side effect of having this overparameterization, more parameters than data, is that when you look for the minima of a loss function, like stochastic gradient descent is doing, you find... I made some calculations based on an old basic theorem of algebra called the Bézout theorem, which gives you an estimate of the number of solutions of a system of polynomial equations. Anyway, the bottom line is that there are probably more minima for a typical deep network than atoms in the universe. Just to say, there are a lot, because of the overparameterization.
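
For reference, the classical bound from Bézout's theorem (how it is applied to a deep network's critical-point equations is Poggio's calculation and is not reproduced here): a system of n polynomial equations in n unknowns, with degrees d_1, ..., d_n, has at most

    \[ d_1 \, d_2 \cdots d_n \]

isolated complex solutions, counted with multiplicity. Because the bound is a product over equations it grows exponentially: for instance, 200 cubic equations in 200 unknowns already allow up to 3^200, roughly 10^95, isolated solutions, comfortably beyond the commonly cited ~10^80 atoms in the observable universe.
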
https://karpathy.ai/lexicap/0013-large.html#00:43:44.760
More global minima, zero minima, good minima?

https://karpathy.ai/lexicap/0013-large.html#00:43:50.280
More global minima. Yeah, a lot of them.
https://karpathy.ai/lexicap/0013-large.html#00:43:53.200
So you have a lot of solutions. So it's not so surprising that you can find them relatively easily. And this is because of the overparameterization.

https://karpathy.ai/lexicap/0013-large.html#00:44:04.200
The overparameterization sprinkles that entire space with solutions that are pretty good. It's not so surprising, right?
https://karpathy.ai/lexicap/0013-large.html#00:44:11.240
It's like if you have a system of linear equations and you have more unknowns than equations. Then, we know, you have an infinite number of solutions, and the question is to pick one. That's another story. But you have an infinite number of solutions, so there are a lot of values of your unknowns that satisfy the equations.
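
A tiny concrete version of that analogy, with invented numbers: an underdetermined linear system has a whole affine subspace of solutions, and a solver simply picks one of them (here, the minimum-norm one):

    import numpy as np

    # Underdetermined system: 2 equations, 5 unknowns -> infinitely many solutions.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 5))   # more unknowns (columns) than equations (rows)
    b = rng.normal(size=2)

    # lstsq returns one particular solution: the minimum-Euclidean-norm one.
    x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Any vector in the null space of A can be added without changing A @ x.
    null_basis = np.linalg.svd(A)[2][2:]          # 3 directions of freedom
    x_other = x_min_norm + 10.0 * null_basis[0]   # a different, equally valid solution

    print(np.allclose(A @ x_min_norm, b))  # True
    print(np.allclose(A @ x_other, b))     # True -- both satisfy the equations
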
https://karpathy.ai/lexicap/0013-large.html#00:44:33.160
But it's possible that there are a lot of those solutions that aren't very good. What's surprising is that they're pretty good.

https://karpathy.ai/lexicap/0013-large.html#00:44:39.160
So that's a good question: why can you pick one that generalizes well?

https://karpathy.ai/lexicap/0013-large.html#00:44:42.840
Yeah.

https://karpathy.ai/lexicap/0013-large.html#00:44:44.120
That's a separate question, with separate answers.
https://karpathy.ai/lexicap/0013-large.html#00:44:47.120
One theorem that people like to talk about, that kind of inspires imagination of the power of neural networks, is the universality, the universal approximation theorem: that you can approximate any computable function with just a finite number of neurons in a single hidden layer. Do you find this theorem surprising? Do you find it useful, interesting, inspiring?
https://karpathy.ai/lexicap/0013-large.html#00:45:12.600
No, this one I never found very surprising. It was known since the 80s, since I entered the field, because it's basically the same as the Weierstrass theorem, which says that I can approximate any continuous function with a polynomial of sufficiently... with a sufficient number of terms, monomials. So it's basically the same, and the proofs are very similar.
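
Roughly, the two statements being compared, in their standard textbook forms (note that both are usually stated for continuous functions):

Weierstrass approximation theorem: for every continuous f on [a, b] and every ε > 0 there is a polynomial p with

    \[ \sup_{x \in [a,b]} |f(x) - p(x)| < \varepsilon . \]

Universal approximation theorem (Cybenko, Hornik): for every continuous f on a compact set K in R^d and every ε > 0 there are a finite N and weights such that the one-hidden-layer network

    \[ g(x) = \sum_{i=1}^{N} c_i \, \sigma(w_i^\top x + b_i) \]

satisfies sup over x in K of |f(x) - g(x)| < ε, for a suitable (for example, sigmoidal) nonlinearity σ.
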
https://karpathy.ai/lexicap/0013-large.html#00:45:41.680
So your intuition was, there was never any doubt that neural networks in theory could be very strong approximators.

https://karpathy.ai/lexicap/0013-large.html#00:45:48.000
Right.
https://karpathy.ai/lexicap/0013-large.html#00:45:48.800
The question, the interesting question, is this: the theorem says you can approximate, fine. But when you ask how many neurons, for instance, or in the case of polynomials, how many monomials, I need to get a good approximation, then it turns out that that depends on the dimensionality of your function, how many variables you have. But it depends on the dimensionality of your function in a bad way. For instance, suppose you want an error which is no worse than 10% in your approximation: you come up with a network that approximates your function within 10%. Then it turns out that the number of units you need is on the order of 10 to the dimensionality d, how many variables. So if you have two variables, d is two, you need 100 units and OK. But if you have, say, 200 by 200 pixel images, now this is 40,000 variables, whatever.
https://karpathy.ai/lexicap/0013-large.html#00:47:06.840
We again go to the size of the universe pretty quickly.

https://karpathy.ai/lexicap/0013-large.html#00:47:09.800
Exactly, 10 to the 40,000 or something. And so this is called the curse of dimensionality, not quite appropriately.
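
The arithmetic behind that jump, as a minimal sketch (only the exponent matters here; constants and the precise rate are omitted):

    # The 10**d scaling described above: units needed for ~10% error.
    import math

    eps = 0.1
    for d in (2, 40_000):                       # two variables vs. a 200x200 image
        exponent = d * math.log10(1 / eps)      # log10 of the unit count
        print(f"d = {d:>6}: ~10**{exponent:.0f} units")
    # d =      2: ~10**2 units      (a hundred -- fine)
    # d =  40000: ~10**40000 units  (vastly more than the ~10**80 atoms in the universe)
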
https://karpathy.ai/lexicap/0013-large.html#00:47:22.280
And the hope is with the extra layers, you can remove the curse.