Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10
Source: https://karpathy.ai/lexicap/0010-large.html (00:09:34–00:13:41)

…and then the robot figured out that what the person was actually after was a backflip. And I'd imagine the same would be true for things like more interactive robots: that the robot would figure out over time, oh, this kind of thing apparently is appreciated more than this other kind of thing.

So when I first picked up Richard Sutton's reinforcement learning book, before this deep learning era, before the reemergence of neural networks as a powerful mechanism for machine learning, RL seemed to me like magic. It was beautiful. It seemed like what intelligence is: RL, reinforcement learning. So how do you think we can possibly learn anything about the world when the reward for the actions is so delayed, so sparse? Why do you think RL works? Why do you think you can learn anything under such sparse rewards, whether it's regular reinforcement learning or deep reinforcement learning? What's your intuition?

The counterpart of that question is: why does RL need so many samples, so many experiences to learn from? Because what's really happening is, when you have a sparse reward, you do something for, say, 100 actions, and then you get a reward. Maybe you get a score of three. Okay, three; not sure what that means. You go again, and now you get two. Now you know that the sequence of 100 actions you did the second time around was somehow worse than the sequence of 100 actions you did the first time around. But it's tough to know which of those actions were better or worse; some might have been good and some bad in either sequence. And so that's why it needs so many experiences. But once you have enough experiences, RL is effectively teasing that apart. It's trying to say: okay, what is consistently there when you get a higher reward, and what's consistently there when you get a lower reward? And then the magic of the policy gradient update is to say: now let's update the neural network to make the actions that were present when things went well more likely, and make the actions that were present when things went less well less likely.
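The update described here is essentially the classic REINFORCE-style policy gradient. A minimal sketch of that idea (the toy sparse-reward task, learning rate, and batch size are illustrative assumptions, not anything from the conversation):

```python
import numpy as np

# Illustrative REINFORCE-style sketch: a softmax policy over two actions.
# Reward arrives only at the end of a 100-step episode; episodes that score
# above the batch average have their actions made more likely, and
# below-average episodes less likely -- exactly the update described above.

rng = np.random.default_rng(0)
n_actions = 2
theta = np.zeros(n_actions)          # logits of a state-independent policy

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def run_episode():
    """Toy sparse-reward task (an assumption for this sketch): the single
    delayed reward is the number of times action 1 was taken in 100 steps."""
    actions = rng.choice(n_actions, size=100, p=softmax(theta))
    reward = float((actions == 1).sum())
    return actions, reward

def reinforce_update(episodes, lr=0.01):
    global theta
    baseline = np.mean([r for _, r in episodes])   # batch-average baseline
    grad = np.zeros_like(theta)
    for actions, r in episodes:
        probs = softmax(theta)
        for a in actions:
            onehot = np.eye(n_actions)[a]
            # grad of log pi(a) for a softmax policy is (onehot - probs)
            grad += (r - baseline) * (onehot - probs)
    theta += lr * grad / len(episodes)

for _ in range(30):
    batch = [run_episode() for _ in range(10)]
    reinforce_update(batch)

print(f"P(action 1) after training: {softmax(theta)[1]:.3f}")
```

The baseline subtraction is what "teases apart" which actions are consistently present in higher-reward episodes: actions correlated with above-average returns get positive gradient weight, the rest negative.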
So that is the counterpoint. But it seems like you would need to run it a lot more than you do. Even though right now people could say that RL is very inefficient, it seems to be way more efficient than one would imagine on paper. That the simple updates to the policy, the policy gradient, can learn, exactly as you just said, what are the common actions that seem to produce good results, and that this can learn anything at all, seems counterintuitive, at least. Is there some intuition behind it?

Yeah, so I think there are a few ways to think about this. The way I tended to think about it originally: when we started working on deep reinforcement learning here at Berkeley, which was maybe 2011, '12, '13, around that time, John Schulman was a PhD student initially driving it forward here. And the way we thought about it at the time was: if you think about rectified linear units, or rectifier-type neural networks, what do you get? You get something that's piecewise linear feedback control. And if you look at the literature, linear feedback control is extremely successful; it can solve many, many problems surprisingly well. I remember, for example, when we did helicopter flight: if you're in a stationary flight regime (not a non-stationary one, but a stationary flight regime like hover), you can use linear feedback control to stabilize the helicopter, a very complex dynamical system, but the controller is relatively simple. And so I think a big part of it is that if you do feedback control, even though the system you control can be very, very complex, often relatively simple control architectures can already do a lot.
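Abbeel's point can be illustrated with a textbook example (my sketch, not his code): a hand-tuned linear state-feedback law u = -Kx stabilizing a discrete-time double integrator. A ReLU policy network computes a continuous piecewise linear function of the state, so it effectively stitches many such linear laws together across regions of state space.

```python
import numpy as np

# Linear state feedback u = -K x on a double integrator (position, velocity).
# The dynamics and the gains K are hand-picked for this illustration.

dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])        # position integrates velocity
B = np.array([[0.0],
              [dt]])              # control input is an acceleration
K = np.array([[2.0, 3.0]])        # hand-tuned feedback gains (assumption)

# Closed-loop matrix A - B K has eigenvalues 0.9 and 0.8, both inside the
# unit circle, so the feedback law is stabilizing.
x = np.array([[1.0], [0.0]])      # start 1 unit from the target, at rest
for _ in range(200):
    u = -K @ x                    # the entire "controller" is one matrix multiply
    x = A @ x + B @ u

print(f"final position: {x[0, 0]:.2e}, final velocity: {x[1, 0]:.2e}")
```

The controller is a single 1×2 matrix, yet it stabilizes the system from any initial state; the "linear is not good enough" cases are what the piecewise structure of a ReLU network adds.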
But then also, just linear is not good enough. And so one way you can think of these neural networks is that sometimes they tile the space, which people were already trying to do more by hand or with finite state machines: say this linear controller here…