Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10

Lex Fridman (https://karpathy.ai/lexicap/0010-large.html#00:22:36.240): You draw this, by the way, just to frame things. I've heard you say somewhere that it's the difference between learning to master versus learning to generalize, and that it's a nice line to think about. And I guess you're saying it's a gray area where learning to master ends and learning to generalize begins.
Pieter Abbeel (https://karpathy.ai/lexicap/0010-large.html#00:22:54.520): I think I might have heard this. I might have heard it somewhere else, and I think it might have been in one of your interviews, maybe the one with Yoshua Bengio, I'm not 100% sure. But I liked the example. I'm not sure who it was from, but the example was essentially: if you use current deep learning techniques to predict, let's say, the relative motion of our planets, it would do pretty well. But now, if a massive new mass enters our solar system, it would probably not predict what will happen, right? And that's a different kind of generalization. That's a generalization that relies on the ultimate simplest explanation we have available today for the motion of planets, whereas plain pattern recognition could predict our current solar system's motion pretty well, no problem. And so I think that's an example of a kind of generalization that is a little different from what we've achieved so far.
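A toy numerical sketch of the point above (my construction, not from the conversation): a pure pattern-recognition model, here a polynomial fit standing in for any flexible curve-fitter, predicts orbital positions well on the system it was trained on, but fails once the system itself changes, with a shifted orbital period standing in for a new mass altering the dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def orbit_x(t, period):
    """x-coordinate of a circular orbit with the given period."""
    return np.cos(2 * np.pi * t / period)

# Train on positions from one fixed system (period = 1.0).
t_train = np.sort(rng.uniform(0.0, 2.0, 200))
x_train = orbit_x(t_train, period=1.0)

# Degree-9 polynomial: a stand-in for any flexible pattern-fitter.
coeffs = np.polyfit(t_train, x_train, deg=9)

t_test = np.linspace(0.1, 1.9, 50)  # same time range as training
# Same system: interpolation works fine.
in_dist_err = np.max(np.abs(np.polyval(coeffs, t_test) - orbit_x(t_test, 1.0)))
# "New mass enters": the dynamics change, so the period changes.
shifted_err = np.max(np.abs(np.polyval(coeffs, t_test) - orbit_x(t_test, 1.5)))

print(f"same-system max error:    {in_dist_err:.3f}")
print(f"changed-system max error: {shifted_err:.3f}")
```

The fit has no notion of gravity, so nothing in it transfers to the altered system, which is exactly the kind of generalization a physical law would provide for free.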
And it's not clear whether just regularizing more, forcing the model to come up with a simpler and simpler explanation and saying, look, this is still not simple enough, would get us there. But that's what physics researchers do, right? They ask: can I make this even simpler? How simple can I get this? What's the simplest equation that can explain everything, the master equation for the entire dynamics of the universe? We haven't really pushed that direction as hard in deep learning, I would say. I'm not sure it should be pushed, but it seems like a kind of generalization you get from that which you don't get from our current methods so far.
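A minimal sketch of "forcing a simpler explanation" (my illustration, not from the conversation): given noisy samples of an inverse-square law, sequentially thresholded least squares, a crude sparsity prior over a library of candidate terms, discards everything except the one term that actually explains the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a hidden simple law: y = 3 / r^2.
r = rng.uniform(1.0, 5.0, 400)
y = 3.0 / r**2 + rng.normal(0.0, 0.001, r.size)

# Candidate "explanations": a small library of simple terms.
names = ["1", "r", "r^2", "1/r", "1/r^2"]
X = np.column_stack([np.ones_like(r), r, r**2, 1.0 / r, 1.0 / r**2])

# Sequentially thresholded least squares: fit, zero out small
# coefficients, refit on the survivors, repeat.
coef = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.2
    coef[small] = 0.0
    active = ~small
    if active.any():
        coef[active] = np.linalg.lstsq(X[:, active], y, rcond=None)[0]

kept = [n for n, c in zip(names, coef) if c != 0.0]
print("surviving terms:", kept)
print("coefficient of 1/r^2:", coef[-1])
```

Plain least squares spreads a little weight over every correlated feature; the thresholding loop plays the role of the physicist asking "can I make this even simpler?" until only the inverse-square term remains.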
Lex Fridman (https://karpathy.ai/lexicap/0010-large.html#00:24:27.400): So I just talked to Vladimir Vapnik, for example, one of the founders of statistical learning theory, and he kind of dreams of creating the E = mc^2 for learning, right? The general theory of learning. Do you think that's a fruitless pursuit in the near term, within the next several decades?
Pieter Abbeel (https://karpathy.ai/lexicap/0010-large.html#00:24:51.800): I think that's a really interesting pursuit, in the following sense: there is a lot of evidence that the brain is pretty modular. So I wouldn't maybe think of it as the theory, the underlying theory, but more as the principle. There have been findings where people who are blind will use the part of the brain usually used for vision for other functions. And even if people get rewired in some way, they might be able to reuse parts of their brain for other functions. What that suggests is some kind of modularity, and I think it's a pretty natural thing to strive for, to see: can we find that modularity? Can we find this thing? Of course, not every part of the brain is exactly the same, and not everything can be rewired arbitrarily. But if you think of something like the neocortex, which is a pretty big part of the brain, it seems fairly modular from the findings so far. Can you design something equally modular? And if you can just grow it, it probably becomes more capable. I think that would be the kind of interesting underlying principle to shoot for, and it's not unrealistic.
Lex Fridman (https://karpathy.ai/lexicap/0010-large.html#00:26:09.400): Do you think you prefer math or empirical trial and error for the discovery of the essence of what it means to do something intelligent? Reinforcement learning embodies both camps, right? You prove that something converges, you prove the bounds, and at the same time a lot of the successes are "well, let's try this and see if it works." So which do you gravitate towards? How do you think of those two parts of your brain?
Pieter Abbeel (https://karpathy.ai/lexicap/0010-large.html#00:26:39.920): Maybe I would prefer that we could make the progress with mathematics. And the reason is that often, if you have something you can mathematically formalize, you can leapfrog a lot of experimentation, and experimentation takes a long time to get through. A lot of trial and error is kind of reinforcement learning in your own research process: you need to do a lot of trial and error before you get to a success. So if you can leapfrog that, to my mind, that's what the math is about. And hopefully, once you do a bunch of experiments, you start seeing a pattern and can do some derivations that leapfrog some experiments. But I agree with you. I mean, in practice, a lot of the progress has been such