Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
Transcript: https://karpathy.ai/lexicap/0009-large.html

[00:30:58] There's this tension, there's this game. So if we study pedestrians, and there's a lot of work with pedestrians: if you approach pedestrians as purely an obstacle-avoidance problem, so you're doing look-ahead without really modeling their intent, they're going to take advantage of you. They're not going to respect you at all. There has to be a tension, a fear, some amount of uncertainty. That's how we have created it.
[00:31:24] Or at least just a kind of resoluteness. You have to display a certain amount of resoluteness; you can't be too tentative.

[00:31:29] And yeah, so the solutions then become pretty complicated, right? You get into game-theoretic analyses. And so at Berkeley now, we're working a lot on this kind of interaction between machines and humans.
[00:31:51] And that's exciting.
[00:31:53] And so my colleague Anca Dragan, actually: if you formulate the problem game-theoretically and just let the system figure out the solution, it does interesting, unexpected things. Like sometimes at a stop sign, if no one is going first, the car will actually back up a little, right? Just to indicate to the other cars that they should go. And that's something it invented entirely by itself. We didn't say "this is the language of communication at stop signs"; it figured it out.
[00:32:30] That's really interesting. So let me just step back for a second to this beautiful philosophical notion. Pamela McCorduck in 1979 wrote, "AI began with the ancient wish to forge the gods." So when you think about the history of our civilization, do you think that there is an inherent desire to create, let's not say gods, but superintelligence? Is it inherent to us? Is it in our genes, that the natural arc of human civilization is to create things of greater and greater power, and perhaps echoes of ourselves? To create the gods, as Pamela said?

[00:33:19] Maybe. I mean, we're all individuals, but certainly we see over and over again in history individuals who thought about this possibility.

[00:33:32] Hopefully I'm not being too philosophical here, but if you look at the arc of this, where this is going (and we'll talk about AI safety, we'll talk about greater and greater intelligence): when you created the Othello program and you felt this excitement, what was that excitement? Was it the excitement of a tinkerer who created something cool, like a clock? Or was there a magic to it; was it more like a child being born?

[00:34:07] Yeah. So I certainly understand that viewpoint. And if you look at the Lighthill report: in the 70s, there was a lot of controversy in the UK about AI, whether it was for real and how much money the government should invest. It's a long story, but the government commissioned a report by Lighthill, who was a physicist, and he wrote a very damning report about AI, which I think was the point. He said that these were frustrated men who, unable to have children, would like to create life as a kind of replacement, which I think is really pretty unfair. But there is a kind of magic, I would say, when you build something, and what you're building in is really just some understanding of the principles of learning and decision making. To see those principles actually turn into intelligent behavior in specific situations is an incredible thing. And that is naturally going to make you think: okay, where does this end?
[00:36:00] And so there are magical, optimistic views of where it ends; whatever your view of optimism is, whatever your view of utopia is, it's probably different for everybody. But you've often talked about concerns you have about how things may go wrong. I've talked to Max Tegmark; there are a lot of interesting ways to think about AI safety. You're one of the seminal people thinking about this problem, and amongst being in the weeds of actually solving specific AI problems, you're also thinking about the big picture of where we are going. So can you talk about several elements of it? Let's just talk about maybe the control problem, this idea of losing the ability to control the behavior of our AI systems. How do you see that coming about? What do you think we can do to manage it?
[00:37:04] Well, it doesn't take a genius to realize that if you make something that's smarter than you, you might have a problem. Alan Turing wrote about this and gave lectures about it in 1951. He did a lecture on the radio, and he basically said: once the machine thinking method starts, very quickly they'll outstrip humanity. And if we're lucky, we might be able to turn off the power at strategic moments, but even so, our species would be humbled. Actually, he was wrong about that: a sufficiently intelligent machine is not going to let you switch it off. It's actually in competition with you.

[00:37:55] So what do you think is most likely going to happen? And just for a quick tangent: what do you think is meant by "if we shut off this superintelligent machine, our species will be humbled"?
[00:38:06] I think he means that we would realize that we are inferior, right? That we only survived by the skin of our teeth because we happened to get to the off switch just in time, and if we hadn't, then we would have lost control over the earth.
[00:38:32] Are you more worried, when you think about this stuff, about superintelligent AI, or are you more worried about super-powerful AI that's not aligned with our values? The paperclip scenarios, kind of...

[00:38:43] So the main problem I'm working on is the control problem: the problem of machines pursuing objectives that are, as you say, not aligned with human objectives. And this has been the way we've thought about AI since the beginning: you build a machine for optimizing, then you put in some objective, and it optimizes, right?

[00:39:14] And we can think of this as the King Midas problem. King Midas put in this objective, "everything I touch should turn to gold," and the gods (that's like the machine) said, okay, done, you now have this power. And of course, his food, his drink, and his family all turned to gold, and he dies of misery and starvation. It's a warning, a failure mode, and pretty much every culture in history has had some story along the same lines. There's the genie that gives you three wishes, and the third wish is always, you know, "please undo the first two wishes, because I messed up."

[00:39:59] And when Arthur Samuel wrote his checker-playing program, which learned to play checkers considerably better than Arthur Samuel could play, and actually reached a pretty decent standard, Norbert Wiener, who was one of the major mathematicians of the 20th century and sort of the father of modern automation and control systems, saw this. He basically extrapolated, as Turing did, and said: okay, this is how we could lose control. And specifically, that we have to be certain that the purpose we put into the machine is the purpose which we really desire. And the problem is, we can't do that.
[00:40:57] You mean it's very difficult to encode, to put our values on paper? Or are you just saying it's impossible?
[00:41:10] So theoretically it's possible, but in practice it's extremely unlikely that we could correctly specify, in advance, the full range of concerns of humanity.
[00:41:24] You talked about cultural transmission of values; I think that's how human-to-human transmission of values happens, right?
[00:41:31] Well, we learn, yeah. I mean, as we grow up, we learn about the values that matter: how things should go, what is reasonable to pursue and what isn't reasonable to pursue.
[00:41:43] You think machines can learn in the same kind of way?
[00:41:46] Yeah. So I think that what we need to do is get away from this idea that you build an optimising machine and then you put the objective into it. Because if it's possible that you might put in a wrong objective (and we already know this is possible, because it's happened lots of times, right?), that means the machine should never take an objective that's given as gospel truth. Because once it takes the objective as gospel truth, it believes that whatever actions it's taking in pursuit of that objective are the correct things to do. So you could be jumping up and down and saying, "No, no, no, you're going to destroy the world," but the machine knows what the true objective is and is pursuing it, and tough luck to you.

[00:42:35] And this is not restricted to AI, right? This is true, I think, of many 20th-century technologies. In statistics, you minimise a loss function, and the loss function is exogenously specified. In control theory, you minimise a cost function. In operations research, you maximise a reward function. And so on. So in all these disciplines,