Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
https://karpathy.ai/lexicap/0009-large.html#00:42:59.040
Stuart Russell: ...this is how we conceive of the problem. And it's the wrong problem, because we cannot specify with certainty the correct objective, right? We need uncertainty; we need the machine to be uncertain about what it is that it's supposed to be maximising.
https://karpathy.ai/lexicap/0009-large.html#00:43:18.080
Lex Fridman: A favourite idea of yours, I've heard you say somewhere... well, I shouldn't pick favourites, but it just sounds beautiful: we need to teach machines humility. It's a beautiful way to put it, I love it. That they're humble, that they know that they don't know what it is they're supposed to be doing, and that those objectives, I mean, they exist, they're within us, but we may not be able to explicate them; we may not even know how we want our future to go.
https://karpathy.ai/lexicap/0009-large.html#00:43:56.160
Stuart Russell: Exactly. And the machine, a machine that's uncertain, is going to be deferential to us. So if we say, don't do that, well, now the machine learns something a bit more about our true objectives, because something that it thought was reasonable in pursuit of our objective turns out not to be. So now it's learned something, and it's going to defer, because it wants to be doing what we really want. And that point, I think, is absolutely central to solving the control problem. And it's a different kind of AI. When you take away this idea that the objective is known, then in fact a lot of the theoretical frameworks that we're so familiar with, you know, Markov decision processes, goal-based planning, standard games research, all of these techniques actually become inapplicable. And you get a more complicated problem, because now the interaction with the human becomes part of the problem: the human, by making choices, is giving you more information about the true objective, and that information helps you achieve the objective better. And so that really means that you're mostly dealing with game-theoretic problems, where you've got the machine and the human and they're coupled together, rather than a machine going off by itself with a fixed objective.
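Russell's point about a machine updating when the human says "don't do that" can be sketched numerically. This is a toy illustration with invented objectives and utilities (my assumptions, not Russell's actual formulation, which is closer to cooperative inverse reinforcement learning): the machine keeps a belief over candidate objectives, proposes the action with the best expected utility under that uncertainty, and treats a human veto as evidence against the objectives that favoured the vetoed action.

```python
# Toy sketch: a machine uncertain about which objective is the human's
# true one. All objectives, actions, and utilities below are made up
# for illustration.

ACTIONS = ["drive_fast", "drive_slow"]
CANDIDATE_OBJECTIVES = {
    # utility of each action under each hypothesised objective
    "maximise_speed":  {"drive_fast": 1.0, "drive_slow": 0.2},
    "maximise_safety": {"drive_fast": 0.1, "drive_slow": 0.8},
}

def best_action(belief):
    """Pick the action with the highest expected utility over objectives."""
    return max(ACTIONS, key=lambda a: sum(
        p * CANDIDATE_OBJECTIVES[obj][a] for obj, p in belief.items()))

def update_on_veto(belief, vetoed):
    """Bayesian-style update after the human says "don't do that":
    the likelihood of a veto is taken as (1 - utility of the vetoed
    action), so objectives that favoured it lose probability mass."""
    posterior = {obj: p * (1.0 - CANDIDATE_OBJECTIVES[obj][vetoed])
                 for obj, p in belief.items()}
    total = sum(posterior.values())
    return {obj: p / total for obj, p in posterior.items()}

belief = {"maximise_speed": 0.5, "maximise_safety": 0.5}  # uniform prior
proposal = best_action(belief)             # "drive_fast" (EU 0.55 vs 0.50)
belief = update_on_veto(belief, proposal)  # human vetoes; mass shifts to safety
```

After the veto the machine's belief concentrates on `maximise_safety`, and `best_action(belief)` flips to `drive_slow`: it deferred, and it learned something about the true objective, which is exactly the coupling Russell describes.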
https://karpathy.ai/lexicap/0009-large.html#00:45:39.040
Lex Fridman: Which is fascinating, on the machine and the human level, that when you don't have an objective, it means you're together coming up with an objective. I mean, there's a lot of philosophy where you could argue that life doesn't really have meaning: we together agree on what gives it meaning, and we kind of culturally create the things that give us a reason why the heck we are on this earth anyway. We together as a society create that meaning, and you have to learn that objective. And one of the biggest... I thought that's where you were going to go for a second. One of the biggest troubles we run into, outside of statistics and machine learning and AI, just in human civilization, is... I was born in the Soviet Union, and in the history of the 20th century we ran into the most trouble, us humans, when there was certainty about the objective, and you do whatever it takes to achieve that objective, whether you're talking about Germany or communist Russia. You get into trouble with humans.
https://karpathy.ai/lexicap/0009-large.html#00:46:47.040
Stuart Russell: I would say with, you know, corporations... in fact, some people argue that we don't have to look forward to a time when AI systems take over the world: they already have, and they're called corporations. Corporations happen to be using people as components right now, but they are effectively algorithmic machines, and they're optimizing an objective, which is quarterly profit, that isn't aligned with the overall wellbeing of the human race. And they are destroying the world. They are primarily responsible for our inability to tackle climate change. So I think that's one way of thinking about what's going on with corporations. But I think the point you're making is valid: there are many systems in the real world where we've sort of prematurely fixed on the objective and then decoupled the machine from those it's supposed to be serving. And I think you see this with government, right? Government is supposed to be a machine that serves people, but instead it tends to be taken over by people who have their own objective and use government to optimize that objective, regardless of what people want.
https://karpathy.ai/lexicap/0009-large.html#00:48:09.120
Lex Fridman: Do you find appealing the idea of almost arguing machines, where you have multiple AI systems with a clear, fixed objective? We have in government the red team and the blue team: they're very fixed on their objectives, and they argue, and they may disagree, but it kind of seems to make it work, somewhat, that duality of it. Okay, let's go a hundred years back, when there was still... or to the founding of this country: there were disagreements, and that disagreement is where... so it was a balance between certainty and forced humility, because the power was distributed.
https://karpathy.ai/lexicap/0009-large.html#00:48:53.840
Stuart Russell: Yeah. I think that the nature of debate and disagreement, of argument, takes as a premise the idea that you could be wrong, which means that you're not necessarily absolutely convinced that your objective is the correct one. If you were absolutely convinced, there'd be no point in having any discussion or argument, because you would never change your mind, and there wouldn't be any sort of synthesis or anything like that. So I think you can think of argumentation as an implementation of a form of uncertain reasoning. I've been reading recently about utilitarianism and the history of efforts to define, in a sort of clear mathematical way, if you like, a formula for moral or political decision making. It's really interesting, the parallels between the philosophical discussions going back 200 years and what you see now in discussions about existential risk, because it's almost exactly the same. Someone would say, okay, well, here's a formula for how we should make decisions. Utilitarianism is, roughly: each person has a utility function, and then we make decisions to maximize the sum of everybody's utility. Then people point out, well, in that case the best policy is one that leads to an enormously vast population, all of whom are living a life that's barely worth living. This is called the repugnant conclusion. Another version is that we should maximize pleasure, and that's what we mean by utility. Then you get people effectively saying, well, in that case we might as well just have everyone hooked up to a heroin drip. They didn't use those words, but that debate was happening in the 19th century as it is now about AI: that if we get the formula wrong, we're going to have AI systems working towards an outcome that, in retrospect, would be exactly wrong.
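The repugnant conclusion Russell mentions is just arithmetic once you commit to summing utilities. A tiny sketch with invented numbers makes the ranking concrete:

```python
# Total utilitarianism ranks worlds by the sum of everyone's utility.
# All population sizes and per-person utilities here are invented
# purely to illustrate the repugnant conclusion.

def total_utility(population, utility_per_person):
    """World value under total utilitarianism: population * average utility."""
    return population * utility_per_person

happy_world = total_utility(10_000_000, 90.0)      # 10M people, very happy
vast_world  = total_utility(10_000_000_000, 0.1)   # 10B people, lives barely
                                                   # worth living (just above 0)

# 10B * 0.1 = 1,000,000,000 beats 10M * 90 = 900,000,000:
# the formula prefers the vast, barely-happy world.
```

Any fixed positive per-person utility, however tiny, can be outvoted by a large enough population, which is why the objection bites no matter where you draw the "barely worth living" line.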
https://karpathy.ai/lexicap/0009-large.html#00:51:20.160
Lex Fridman: As beautifully put. So the echoes are there. But do you think, I mean, if you look at Sam Harris, his imagination worries about the AI version of that, because of the speed at which the things going wrong in the utilitarian context could happen. Is that a worry for you?
https://karpathy.ai/lexicap/0009-large.html#00:51:44.080
Stuart Russell: Yeah. I think that in most cases, not in all, if we have a wrong political idea, we see it starting to go wrong, and we're not completely stupid, and so we say, okay, maybe that was a mistake, let's try something different. Also, we're very slow and inefficient about implementing these things, and so on. So you have to worry when you have corporations or political systems that are extremely efficient. But when we look at AI systems, or even just computers in general, they have this different characteristic from ordinary human activity in the past. So let's say you were a surgeon, and you had some idea about how to do some operation. Well, let's say you were wrong, and that way of doing the operation would mostly kill the patient. You'd find out pretty quickly, like after three or four tries. But that isn't true for pharmaceutical companies, because they don't do three or four operations. They manufacture three or four billion pills and they sell them, and then they find out, maybe six months or a year later, that, oh, people are dying of heart attacks or getting cancer from this drug. And so that's why we have the FDA, right? Because of the scalability of pharmaceutical production. And there have been some unbelievably bad episodes in the history of pharmaceuticals, and adulteration of products, and so on, that have killed tens of thousands or paralyzed hundreds of thousands of people. Now with computers we have that same scalability problem: you can sit there and type "for i equals one to five billion do", and all of a sudden you're having an impact on a global scale. And yet we have no FDA; there's absolutely no control at all over what a bunch of undergraduates with too much caffeine can do to the world. And we look at what happened with Facebook, well, social media in general, and click-through optimization. So you have a simple feedback algorithm that's trying to just optimize click-through. That sounds reasonable, right? Because you don't want to be feeding people ads that they don't care about or aren't interested in. And you might even think of that process as simply adjusting the feeding of ads or news articles or whatever it might be
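The simple feedback algorithm Russell describes can be sketched as a bandit that keeps showing whatever gets clicked most. Everything here (the item names, the click rates, the epsilon-greedy rule) is an invented illustration, not Facebook's actual system; the point it makes is that nothing in the loop cares what the content is, only that it gets clicked.

```python
import random

def ctr_optimizer(items, get_click, rounds=2000, epsilon=0.1):
    """Epsilon-greedy click-through optimizer: mostly show the item with
    the highest observed click rate, occasionally explore at random."""
    shows = {item: 0 for item in items}
    clicks = {item: 0 for item in items}
    for _ in range(rounds):
        if random.random() < epsilon or not any(shows.values()):
            item = random.choice(items)  # explore
        else:                            # exploit the best observed CTR
            item = max(items, key=lambda i:
                       clicks[i] / shows[i] if shows[i] else 0.0)
        shows[item] += 1
        clicks[item] += get_click(item)  # 1 if the user clicked, else 0
    return shows

# Simulated users: they click "outrage" far more often than "news",
# so the loop converges on showing mostly "outrage".
random.seed(0)
rates = {"news": 0.02, "outrage": 0.5}
counts = ctr_optimizer(["news", "outrage"],
                       lambda i: int(random.random() < rates[i]))
```

The optimizer is doing exactly what it was asked, maximizing clicks, which is the mismatch between the stated objective and what we actually want that the whole conversation has been circling.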