Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10

Pieter Abbeel: ...a linear controller here. The neural network learns to tile the space and put, say, a linear controller here, another linear controller there, but it's more subtle than that. It's benefiting from the linear control aspect, and it's benefiting from the tiling, but it's somehow tiling the space one dimension at a time. Because if, let's say, you have a two-layer network, and in the hidden layer a unit makes a transition from active to inactive or the other way around, that is essentially one direction (not axis-aligned, but one direction) along which things change. So you get this very gradual tiling of the space, with a lot of sharing between the linear controllers that tile it. That was always my intuition as to why to expect this might work pretty well: it's essentially leveraging the fact that linear feedback control is so good, though of course not enough on its own, and it's a gradual tiling of the space with linear feedback controllers that share a lot of expertise across them.
https://karpathy.ai/lexicap/0010-large.html#00:13:42.520
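As an aside on the tiling intuition: a ReLU network computes a single fixed affine map inside each region of input space where the hidden units' active/inactive pattern is constant, which is exactly the "one linear controller per tile, with shared weights" picture described above. A minimal NumPy sketch (illustrative only; the network weights and test points are made up, not from the episode):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
# (random weights; an illustrative sketch, not a trained policy)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def pattern(x):
    # On/off pattern of the hidden units at x
    return (W1 @ x + b1) > 0

x = np.array([0.3, -0.2])

# Within the region where the pattern is fixed, f is exactly one affine map:
# f(x) = (W2 D W1) x + (W2 D b1 + b2), with D = diag(pattern)
D = np.diag(pattern(x).astype(float))
A = W2 @ D @ W1          # the "local linear controller" gain
c = W2 @ D @ b1 + b2     # its offset
assert np.allclose(f(x), A @ x + c)

# A nearby point with the same pattern is governed by the same affine map;
# flipping one unit active/inactive switches to a neighboring map that
# shares all the other weights -- the "gradual tiling" in the transcript.
x2 = x + 1e-3
if np.array_equal(pattern(x), pattern(x2)):
    assert np.allclose(f(x2), A @ x2 + c)
```

Each boundary where one hidden unit crosses zero is a single (generally non-axis-aligned) hyperplane, matching the "one direction at a time" description.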
Lex Fridman: So that's really nice intuition, but do you think it scales to more and more general problems, as the number of dimensions goes up and the frequency of a clean reward signal goes down? Does that intuition carry forward to those crazier, weirder worlds that we think of as the real world?
https://karpathy.ai/lexicap/0010-large.html#00:14:36.620
Pieter Abbeel: So I think where things get really tricky in the real world, compared to the things we've looked at so far with great success in reinforcement learning, is the time scales, which the real world takes to an extreme. When you think about the real world: I don't know, maybe some student decided to do a PhD here, right? Okay, that's a decision, a very high-level decision. But if you think about their life, any person's life, it's a sequence of muscle fiber contractions and relaxations; that's how you interact with the world. That's a very high-frequency control problem, but it's ultimately what you do and how you affect the world (until, I guess, we have brain readings and you can maybe do it slightly differently, but typically that's how you affect the world). And the decision to do a PhD is so abstract relative to what you're actually doing in the world. I think that's where credit assignment becomes completely beyond what any current RL algorithm can do, and we need hierarchical reasoning at a level that just isn't available at all yet.
https://karpathy.ai/lexicap/0010-large.html#00:15:03.360
Lex Fridman: Where do you think we can pick up hierarchical reasoning? By which mechanisms?
https://karpathy.ai/lexicap/0010-large.html#00:16:12.520
Pieter Abbeel: Yeah, so maybe let me highlight what I think the limitations are of what was already done 20, 30 years ago. You'll find reasoning systems from back then that reason over relatively long horizons, but the problem is that they were not grounded in the real world. People had to hand-design some kind of logical, dynamical description of the world, and that didn't tie into perception; it didn't tie into real objects and so forth. And so that was a big gap. Now, with deep learning, we're starting to have the ability to really see with sensors, process that, and understand what's in the world. So it's a good time to try to bring these things together.

I see a few ways of getting there. One way would be to say deep learning can get bolted on somehow to some of these more traditional approaches. "Bolted on" would probably mean some kind of end-to-end training, where the deep learning processing leads to a representation that in turn drives some kind of traditional underlying dynamical system that can be used for planning. That's, for example, the direction Aviv Tamar and Thanard Kurutach here have been pushing with Causal InfoGAN, and of course other people too. That's one way: can we somehow force it into a form factor that is amenable to reasoning?

Another direction we've been thinking about for a long time, and didn't make any progress on, was more information-theoretic approaches. The idea there was that what it means to take a high-level action is to choose a latent variable now
https://karpathy.ai/lexicap/0010-large.html#00:16:16.960