episode
stringlengths
45
100
text
stringlengths
1
528
timestamp_link
stringlengths
56
56
Eric Schmidt: Google | Lex Fridman Podcast #8
and you have to basically work both to educate others
https://karpathy.ai/lexicap/0008-large.html#00:32:09.020
Eric Schmidt: Google | Lex Fridman Podcast #8
and give them that opportunity,
https://karpathy.ai/lexicap/0008-large.html#00:32:12.180
Eric Schmidt: Google | Lex Fridman Podcast #8
but also use that wealth to advance human society.
https://karpathy.ai/lexicap/0008-large.html#00:32:13.580
Eric Schmidt: Google | Lex Fridman Podcast #8
In my case, I'm particularly interested in
https://karpathy.ai/lexicap/0008-large.html#00:32:16.700
Eric Schmidt: Google | Lex Fridman Podcast #8
using the tools of artificial intelligence
https://karpathy.ai/lexicap/0008-large.html#00:32:18.540
Eric Schmidt: Google | Lex Fridman Podcast #8
and machine learning to make society better.
https://karpathy.ai/lexicap/0008-large.html#00:32:20.580
Eric Schmidt: Google | Lex Fridman Podcast #8
I've mentioned education, I've mentioned inequality
https://karpathy.ai/lexicap/0008-large.html#00:32:22.860
Eric Schmidt: Google | Lex Fridman Podcast #8
and middle class and things like this,
https://karpathy.ai/lexicap/0008-large.html#00:32:26.020
Eric Schmidt: Google | Lex Fridman Podcast #8
all of which are a passion of mine.
https://karpathy.ai/lexicap/0008-large.html#00:32:28.060
Eric Schmidt: Google | Lex Fridman Podcast #8
It doesn't matter what you do,
https://karpathy.ai/lexicap/0008-large.html#00:32:30.100
Eric Schmidt: Google | Lex Fridman Podcast #8
it matters that you believe in it,
https://karpathy.ai/lexicap/0008-large.html#00:32:31.860
Eric Schmidt: Google | Lex Fridman Podcast #8
that it's important to you,
https://karpathy.ai/lexicap/0008-large.html#00:32:33.700
Eric Schmidt: Google | Lex Fridman Podcast #8
and that your life will be far more satisfying
https://karpathy.ai/lexicap/0008-large.html#00:32:35.380
Eric Schmidt: Google | Lex Fridman Podcast #8
if you spend your life doing that.
https://karpathy.ai/lexicap/0008-large.html#00:32:38.100
Eric Schmidt: Google | Lex Fridman Podcast #8
I think there's no better place to end
https://karpathy.ai/lexicap/0008-large.html#00:32:40.540
Eric Schmidt: Google | Lex Fridman Podcast #8
than a discussion of the meaning of life.
https://karpathy.ai/lexicap/0008-large.html#00:32:43.460
Eric Schmidt: Google | Lex Fridman Podcast #8
Eric, thank you so much.
https://karpathy.ai/lexicap/0008-large.html#00:32:45.220
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
The following is a conversation with Stuart Russell. He's a professor of computer science at
https://karpathy.ai/lexicap/0009-large.html#00:00:00.000
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
UC Berkeley and a coauthor of a book that introduced me and millions of other people
https://karpathy.ai/lexicap/0009-large.html#00:00:04.720
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
to the amazing world of AI called Artificial Intelligence, A Modern Approach. So it was an
https://karpathy.ai/lexicap/0009-large.html#00:00:10.240
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
honor for me to have this conversation as part of MIT course in artificial general intelligence
https://karpathy.ai/lexicap/0009-large.html#00:00:16.720
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
and the artificial intelligence podcast. If you enjoy it, please subscribe on YouTube,
https://karpathy.ai/lexicap/0009-large.html#00:00:23.120
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
iTunes or your podcast provider of choice, or simply connect with me on Twitter at Lex Friedman
https://karpathy.ai/lexicap/0009-large.html#00:00:28.560
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
spelled F R I D. And now here's my conversation with Stuart Russell.
https://karpathy.ai/lexicap/0009-large.html#00:00:34.320
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
So you've mentioned in 1975 in high school, you've created one of your first AI programs
https://karpathy.ai/lexicap/0009-large.html#00:00:41.440
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
that play chess. Were you ever able to build a program that beat you at chess or another board
https://karpathy.ai/lexicap/0009-large.html#00:00:47.600
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
game? So my program never beat me at chess. I actually wrote the program at Imperial College.
https://karpathy.ai/lexicap/0009-large.html#00:00:57.360
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
So I used to take the bus every Wednesday with a box of cards this big and shove them into the
https://karpathy.ai/lexicap/0009-large.html#00:01:06.880
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
card reader. And they gave us eight seconds of CPU time. It took about five seconds to read the cards
https://karpathy.ai/lexicap/0009-large.html#00:01:14.400
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
in and compile the code. So we had three seconds of CPU time, which was enough to make one move,
https://karpathy.ai/lexicap/0009-large.html#00:01:21.440
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
you know, with a not very deep search. And then we would print that move out and then
https://karpathy.ai/lexicap/0009-large.html#00:01:28.080
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
we'd have to go to the back of the queue and wait to feed the cards in again.
https://karpathy.ai/lexicap/0009-large.html#00:01:32.080
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
How deep was the search? Are we talking about one move, two moves, three moves?
https://karpathy.ai/lexicap/0009-large.html#00:01:35.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
No, I think we got an eight move, a depth eight with alpha beta. And we had some tricks of our
https://karpathy.ai/lexicap/0009-large.html#00:01:39.760
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
own about move ordering and some pruning of the tree. But you were still able to beat that program?
https://karpathy.ai/lexicap/0009-large.html#00:01:48.160
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
Yeah, yeah. I was a reasonable chess player in my youth. I did an Othello program and a
https://karpathy.ai/lexicap/0009-large.html#00:01:55.120
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
backgammon program. So when I got to Berkeley, I worked a lot on what we call meta reasoning,
https://karpathy.ai/lexicap/0009-large.html#00:02:01.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
which really means reasoning about reasoning. And in the case of a game playing program,
https://karpathy.ai/lexicap/0009-large.html#00:02:08.640
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
you need to reason about what parts of the search tree you're actually going to explore because the
https://karpathy.ai/lexicap/0009-large.html#00:02:14.240
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
search tree is enormous, bigger than the number of atoms in the universe. And the way programs
https://karpathy.ai/lexicap/0009-large.html#00:02:19.040
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
succeed and the way humans succeed is by only looking at a small fraction of the search tree.
https://karpathy.ai/lexicap/0009-large.html#00:02:27.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
And if you look at the right fraction, you play really well. If you look at the wrong fraction,
https://karpathy.ai/lexicap/0009-large.html#00:02:33.280
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
if you waste your time thinking about things that are never going to happen,
https://karpathy.ai/lexicap/0009-large.html#00:02:37.760
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
moves that no one's ever going to make, then you're going to lose because you won't be able
https://karpathy.ai/lexicap/0009-large.html#00:02:41.600
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
to figure out the right decision. So that question of how machines can manage their own computation,
https://karpathy.ai/lexicap/0009-large.html#00:02:46.480
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
how they decide what to think about, is the meta reasoning question. And we developed some methods
https://karpathy.ai/lexicap/0009-large.html#00:02:53.920
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
for doing that. And very simply, the machine should think about whatever thoughts are going
https://karpathy.ai/lexicap/0009-large.html#00:03:00.720
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
to improve its decision quality. We were able to show that both for Othello, which is a standard
https://karpathy.ai/lexicap/0009-large.html#00:03:07.040
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
two player game, and for Backgammon, which includes dice rolls, so it's a two player game
https://karpathy.ai/lexicap/0009-large.html#00:03:13.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
with uncertainty. For both of those cases, we could come up with algorithms that were actually
https://karpathy.ai/lexicap/0009-large.html#00:03:19.680
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
much more efficient than the standard alpha beta search, which chess programs at the time were
https://karpathy.ai/lexicap/0009-large.html#00:03:25.600
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
using. And that those programs could beat me. And I think you can see the same basic ideas in Alpha
https://karpathy.ai/lexicap/0009-large.html#00:03:31.760
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
Go and Alpha Zero today. The way they explore the tree is using a form of meta reasoning to select
https://karpathy.ai/lexicap/0009-large.html#00:03:42.000
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
what to think about based on how useful it is to think about it. Is there any insights you can
https://karpathy.ai/lexicap/0009-large.html#00:03:51.600
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
describe with our Greek symbols of how do we select which paths to go down? There's really
https://karpathy.ai/lexicap/0009-large.html#00:03:57.360
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
two kinds of learning going on. So as you say, Alpha Go learns to evaluate board positions. So
https://karpathy.ai/lexicap/0009-large.html#00:04:04.720
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
it can look at a go board. And it actually has probably a superhuman ability to instantly tell
https://karpathy.ai/lexicap/0009-large.html#00:04:11.280
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
how promising that situation is. To me, the amazing thing about Alpha Go is not that it can
https://karpathy.ai/lexicap/0009-large.html#00:04:19.760
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
be the world champion with its hands tied behind his back, but the fact that if you stop it from
https://karpathy.ai/lexicap/0009-large.html#00:04:28.240
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
searching altogether, so you say, okay, you're not allowed to do any thinking ahead. You can just
https://karpathy.ai/lexicap/0009-large.html#00:04:36.960
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
consider each of your legal moves and then look at the resulting situation and evaluate it. So
https://karpathy.ai/lexicap/0009-large.html#00:04:42.160
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
what we call a depth one search. So just the immediate outcome of your moves and decide if
https://karpathy.ai/lexicap/0009-large.html#00:04:48.240
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
that's good or bad. That version of Alpha Go can still play at a professional level.
https://karpathy.ai/lexicap/0009-large.html#00:04:53.760
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
And human professionals are sitting there for five, 10 minutes deciding what to do and Alpha Go
https://karpathy.ai/lexicap/0009-large.html#00:05:02.000
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
in less than a second can instantly intuit what is the right move to make based on its ability to
https://karpathy.ai/lexicap/0009-large.html#00:05:06.960
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
evaluate positions. And that is remarkable because we don't have that level of intuition about Go.
https://karpathy.ai/lexicap/0009-large.html#00:05:14.800
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
We actually have to think about the situation. So anyway, that capability that Alpha Go has is one
https://karpathy.ai/lexicap/0009-large.html#00:05:23.280
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
big part of why it beats humans. The other big part is that it's able to look ahead 40, 50, 60 moves
https://karpathy.ai/lexicap/0009-large.html#00:05:31.680
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
into the future. And if it was considering all possibilities, 40 or 50 or 60 moves into the
https://karpathy.ai/lexicap/0009-large.html#00:05:41.520
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
future, that would be 10 to the 200 possibilities. So way more than atoms in the universe and so on.
https://karpathy.ai/lexicap/0009-large.html#00:05:49.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
So it's very, very selective about what it looks at. So let me try to give you an intuition about
https://karpathy.ai/lexicap/0009-large.html#00:06:01.360
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
how you decide what to think about. It's a combination of two things. One is how promising
https://karpathy.ai/lexicap/0009-large.html#00:06:08.800
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
it is. So if you're already convinced that a move is terrible, there's no point spending a lot more
https://karpathy.ai/lexicap/0009-large.html#00:06:14.800
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
time convincing yourself that it's terrible because it's probably not going to change your mind. So
https://karpathy.ai/lexicap/0009-large.html#00:06:22.560
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
the real reason you think is because there's some possibility of changing your mind about what to do.
https://karpathy.ai/lexicap/0009-large.html#00:06:28.800
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
And it's that changing your mind that would result then in a better final action in the real world.
https://karpathy.ai/lexicap/0009-large.html#00:06:34.400
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
So that's the purpose of thinking is to improve the final action in the real world. So if you
https://karpathy.ai/lexicap/0009-large.html#00:06:40.960
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
think about a move that is guaranteed to be terrible, you can convince yourself it's terrible,
https://karpathy.ai/lexicap/0009-large.html#00:06:47.920
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
you're still not going to change your mind. But on the other hand, suppose you had a choice between
https://karpathy.ai/lexicap/0009-large.html#00:06:53.440
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
two moves. One of them you've already figured out is guaranteed to be a draw, let's say. And then
https://karpathy.ai/lexicap/0009-large.html#00:06:59.280
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
the other one looks a little bit worse. It looks fairly likely that if you make that move, you're
https://karpathy.ai/lexicap/0009-large.html#00:07:05.040
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
going to lose. But there's still some uncertainty about the value of that move. There's still some
https://karpathy.ai/lexicap/0009-large.html#00:07:10.000
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
possibility that it will turn out to be a win. Then it's worth thinking about that. So even though
https://karpathy.ai/lexicap/0009-large.html#00:07:16.640
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
it's less promising on average than the other move, which is a good move, it's worth thinking
https://karpathy.ai/lexicap/0009-large.html#00:07:22.080
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
about on average than the other move, which is guaranteed to be a draw. There's still some
https://karpathy.ai/lexicap/0009-large.html#00:07:27.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
purpose in thinking about it because there's a chance that you will change your mind and discover
https://karpathy.ai/lexicap/0009-large.html#00:07:32.160
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
that in fact it's a better move. So it's a combination of how good the move appears to be
https://karpathy.ai/lexicap/0009-large.html#00:07:36.800
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
and how much uncertainty there is about its value. The more uncertainty, the more it's worth thinking
https://karpathy.ai/lexicap/0009-large.html#00:07:42.720
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
about because there's a higher upside if you want to think of it that way.
https://karpathy.ai/lexicap/0009-large.html#00:07:48.640
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
And of course in the beginning, especially in the AlphaGo Zero formulation, everything is shrouded
https://karpathy.ai/lexicap/0009-large.html#00:07:52.240
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
in uncertainty. So you're really swimming in a sea of uncertainty. So it benefits you to,
https://karpathy.ai/lexicap/0009-large.html#00:07:59.760
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
I mean, actually following the same process as you described, but because you're so uncertain
https://karpathy.ai/lexicap/0009-large.html#00:08:07.600
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
about everything, you basically have to try a lot of different directions.
https://karpathy.ai/lexicap/0009-large.html#00:08:11.120
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
Yeah. So the early parts of the search tree are fairly bushy that it will look at a lot
https://karpathy.ai/lexicap/0009-large.html#00:08:15.360
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
of different possibilities, but fairly quickly, the degree of certainty about some of the moves,
https://karpathy.ai/lexicap/0009-large.html#00:08:22.480
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
I mean, if a move is really terrible, you'll pretty quickly find out, right? You lose half
https://karpathy.ai/lexicap/0009-large.html#00:08:27.840
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
your pieces or half your territory and then you'll say, okay, this is not worth thinking
https://karpathy.ai/lexicap/0009-large.html#00:08:32.000
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
about anymore. And then so further down the tree becomes very long and narrow and you're following
https://karpathy.ai/lexicap/0009-large.html#00:08:37.280
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
various lines of play, 10, 20, 30, 40, 50 moves into the future. And that again is something that
https://karpathy.ai/lexicap/0009-large.html#00:08:45.360
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
human beings have a very hard time doing mainly because they just lack the short term memory.
https://karpathy.ai/lexicap/0009-large.html#00:08:55.280