Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
https://karpathy.ai/lexicap/0009-large.html#00:54:41.280

…to match people's preferences, right? Which sounds like a good idea. But in fact, that isn't how the algorithm works, right? You make more money, the algorithm makes more money, if it can better predict what people are going to click on, because then it can feed them exactly that, right? So the way to maximize click-through is actually to modify the people to make them more predictable. And one way to do that is to feed them information which will change their behavior and preferences towards extremes that make them predictable. Whatever is the nearest extreme, or the nearest predictable point, that's where you're going to end up, and the machines will force you there. And I think there's a reasonable argument to say that this, among other things, is contributing to the destruction of democracy in the world.

And where was the oversight of this process? Where were the people saying, okay, you would like to apply this algorithm to 5 billion people on the face of the earth, can you show me that it's safe? Can you show me that it won't have various kinds of negative effects? No, there was no one asking that question. There was no one placed between the undergrads with too much caffeine and the human race. They just did it.
But, in some way outside the scope of my knowledge, economists would argue that the, what is it, the invisible hand, the capitalist system, was the oversight. So if you're going to corrupt society with whatever decision you make as a company, then that's going to be reflected in people not using your product. That's one model of oversight.
We shall see, but in the meantime, you might even have broken the political system that enables capitalism to function.

Well, you've changed it.

We shall see.
Change is often painful. So my question is, absolutely, it's fascinating, you're absolutely right that there was zero oversight on algorithms that can have a profound, civilization-changing effect. So do you think it's possible, I mean, have you seen government? Do you think it's possible to create regulatory bodies, oversight over AI algorithms, which are inherently such a cutting-edge set of ideas and technologies?
Yeah, but I think it takes time to figure out what kind of oversight, what kinds of controls. I mean, it took time to design the FDA regime, you know, and some people still don't like it and they want to fix it. And I think there are clear ways that it could be improved. But the whole notion that you have stage one, stage two, stage three, and here are the criteria for what you have to do to pass a stage-one trial, right? We haven't even thought about what those would be for algorithms. So, I mean, I think there are things we could do right now with regard to bias, for example. We have a pretty good technical handle on how to detect algorithms that are propagating bias that exists in data sets, how to de-bias those algorithms, and even what it's going to cost you to do that. So I think we could start having some standards on that. I think there are things to do with impersonation and falsification that we could work on.
Fakes, yeah.
A very simple point. So impersonation is a machine acting as if it was a person. I can't see a real justification for why we shouldn't insist that machines self-identify as machines. Where is the social benefit in fooling people into thinking that this is really a person when it isn't? I don't mind if it uses a human-like voice, that's easy to understand, that's fine, but it should just say, I'm a machine, in some form.
And how many people are speaking to that? It would seem a relatively obvious point.
Yeah, I mean, there is actually a law in California that bans impersonation, but only in certain restricted circumstances: for the purpose of engaging in a fraudulent transaction, and for the purpose of modifying someone's voting behavior. So those are the circumstances where machines have to self-identify. But I think arguably it should be in all circumstances. And then when you talk about deepfakes, we're just at the beginning, but already it's possible to make a movie of anybody saying anything in ways that are pretty hard to detect.
Including yourself, because you're on camera now and your voice is coming through with high resolution.
Yeah, so you could take what I'm saying and replace it with pretty much anything else you wanted me to be saying, and it would even change my lips and facial expressions to fit. It's a very simple thing, and there's actually not much in the way of real legal protection against that. I think in the commercial area you could say, yeah, you're using my brand and so on; there are rules about that. But in the political sphere, I think at the moment, anything goes. That could be really, really damaging.
And let me just try, not to make an argument, but to look back at history and say something dark. In essence: while regulation, oversight, seems to be exactly the right thing to do here, it seems that what human beings naturally do is wait for something to go wrong. If you're talking about nuclear weapons, you can't talk about nuclear weapons being dangerous until somebody, like the United States, actually drops the bomb, or Chernobyl melts down. Do you think we will have to wait for things to go wrong in a way that's obviously damaging to society, not an existential risk, but obviously damaging? Or do you have faith that...
I hope not, but I think we do have to look at history.
And so the two examples you gave, nuclear weapons and nuclear power, are very, very interesting. Because with nuclear weapons, we knew in the early years of the 20th century that atoms contained a huge amount of energy. We had E equals mc squared. We knew the mass differences between the different atoms and their components, and we knew that you might be able to make an incredibly powerful explosive. So H.G. Wells wrote a science fiction book, I think in 1912. Frederick Soddy, who was the guy who discovered isotopes, the Nobel Prize winner, he gave a speech in 1915 saying that one pound of this new explosive would be the equivalent of 150 tons of dynamite, which turns out to be about right. And this was in World War I, so he was imagining how much worse the world war would be if we were using that kind of explosive. But the physics establishment simply refused to believe that these things could be made.
Including the people who were making it.
Well, they were doing the nuclear physics. I mean, they eventually were the ones who made it. You talk about Fermi or whoever.
Well, up to then the development was mostly theoretical. It was people using sort of primitive kinds of particle acceleration and doing experiments at the level of single particles or collections of particles. They weren't yet thinking about how to actually make a bomb or anything like that. But they knew the energy was there, and they figured that if they understood it better, it might be possible. But the physics establishment, their view, and I think because they did not want it to be true, their view was that it could not be true, that this could not provide a way to make a super weapon. And there was this famous speech given by Rutherford, who was the sort of leader of nuclear physics, on September 11th, 1933. And he said, anyone who talks about the possibility of obtaining energy from the transformation of atoms is talking complete moonshine. And the next morning, Leo Szilard read about that speech and then invented the nuclear chain reaction. And as soon as he had that idea, that you could make a chain reaction with neutrons, because neutrons were not repelled by the nucleus, so they could enter the nucleus and then continue the reaction, he instantly realized that the world was in deep doo-doo. Because this is 1933, right? Hitler had recently come to power in Germany. Szilard was in London, and eventually became a refugee and came to the US. And in the process of having the idea about the chain reaction, he figured out basically how to make a bomb and also how to make a reactor. And he patented the reactor in 1934. But because of the situation, the great-power conflict situation that he could see happening, he kept that a secret. And so between then and the beginning of World War II, people were working, including the Germans, on how to actually create neutron sources, what specific…