Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10
(transcript: https://karpathy.ai/lexicap/0010-large.html#00:31:11.340)

...and the robot learns to translate it into what it means for the robot to do it. And that was a meta-learning formulation: learn from one to get the other. And that, I think, opens up a lot of opportunities to learn a lot more quickly.
So my focus is on autonomous vehicles. Do you think autonomous driving is amenable to this kind of third-person-watching approach?
So for autonomous driving, I would say third person is slightly easier. And the reason I'm gonna say it's slightly easier to do with third person is because the car dynamics are very well understood. So the...

Easier than first person, you mean? Or easier than...
So I think the distinction between third person and first person is not a very important distinction for autonomous driving. They're very similar, because the distinction is really about who turns the steering wheel. Or maybe, let me put it differently: how to get from the point where you are now to a point, let's say, a couple of meters in front of you. That's a problem that's very well understood, and that's the only distinction between third and first person there. Whereas with robot manipulation, interaction forces are very complex, and it's still a very different thing.

For autonomous driving, I think there is still the question of imitation versus RL. Imitation gives you a lot more signal. I think where imitation is lacking, and needs some extra machinery, is that in its normal format it doesn't think about goals or objectives. And of course, there are versions of imitation learning, reinforcement-learning-type imitation learning, which also think about goals; I think then we're getting much closer. But I think it's very hard to imagine a fully reactive car generalizing well, if it really doesn't have a notion of objectives, to the kind of generality that you would want. You'd want more than just the reactivity that you get from behavioral cloning slash supervised learning.
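The "behavioral cloning slash supervised learning" mentioned here is just regression from observed states to expert actions. A minimal sketch of that idea, using synthetic demonstrations and a hypothetical linear driving policy (the state features, weights, and noise level are illustrative, not from the conversation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert" demonstrations: states -> steering actions.
# The expert here is a hypothetical linear controller plus a little noise.
true_w = np.array([0.8, -0.5, 0.1])
states = rng.normal(size=(500, 3))  # e.g. lane offset, heading error, curvature
actions = states @ true_w + 0.01 * rng.normal(size=500)

# Behavioral cloning = fit the policy by supervised regression
# on (state, action) pairs, here via least squares.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy is purely reactive: it maps the current state to an
# action with no notion of a goal or objective, which is exactly the
# limitation being discussed.
def policy(s):
    return s @ w
```

With enough demonstrations the regression recovers the expert's mapping well, but nothing in the fitted policy encodes *why* the expert acted that way, so generalization to new situations is not guaranteed.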
So a lot of the work, whether it's self-play or even imitation learning, would benefit significantly from simulation, from effective simulation. And you're doing a lot of stuff in the physical world and in simulation. Do you have hope for the power of simulation growing and growing, eventually becoming boundless, to where most of what we need to operate in the physical world could be simulated to a degree that's directly transferable to the physical world? Or are we still very far away from that?

So I think we could even rephrase that question, in some sense.

Please.
So, the power of simulation, right? As simulators get better and better, it of course becomes stronger, and we can learn more in simulation. But there's also another version, where you say the simulator doesn't even have to be that precise, as long as it's somewhat representative. Instead of trying to get one simulator that is sufficiently precise to learn in and transfer really well to the real world, I'm gonna build many simulators.
Ensemble of simulators?

Ensemble of simulators. Not any single one of them is sufficiently representative of the real world such that it would work if you train in there. But if you train in all of them, then there is something that's good in all of them. The real world will just be another one of them, not identical to any one of them, but just another one of them.

Another sample from the distribution of simulators.

Exactly.
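This ensemble idea is commonly known as domain randomization: train across many simulators with randomized parameters, so that the real world looks like just one more draw from the same distribution. A toy sketch under assumed one-dimensional linear dynamics (the dynamics model, parameter ranges, and cost are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "simulator" is a 1-D dynamics model x' = a*x + b*u with
# randomized physical parameters (a, b).
def sample_sim(rng):
    return rng.uniform(0.8, 1.2), rng.uniform(0.5, 1.5)  # (a, b)

# Evaluate a linear feedback policy u = -k*x by the total squared
# state over a short rollout (lower is better).
def rollout_cost(k, a, b, x0=1.0, steps=20):
    x, cost = x0, 0.0
    for _ in range(steps):
        x = a * x + b * (-k * x)
        cost += x * x
    return cost

# Domain randomization: pick the gain k that works best *on average
# across the ensemble*, rather than tuning it to any single simulator.
sims = [sample_sim(rng) for _ in range(50)]
ks = np.linspace(0.0, 2.0, 201)
avg_costs = [np.mean([rollout_cost(k, a, b) for a, b in sims]) for k in ks]
k_star = ks[int(np.argmin(avg_costs))]

# The "real world" is treated as just another sample from the
# distribution of simulators.
real_a, real_b = sample_sim(rng)
real_cost = rollout_cost(k_star, real_a, real_b)
```

A gain tuned to any single simulator can fail badly on another; the ensemble-averaged gain trades a little per-simulator performance for robustness across the whole parameter distribution, including the held-out "real" one.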
We do live in a simulation, so this is just one other one.

I'm not sure about that, but yeah.

It's definitely a very advanced simulator, if it is.

Yeah, it's a pretty good one. I've talked to Stuart Russell. It's something you think about a little bit too.