nHlE7EgJFds
OK, here is lecture
ten in linear algebra. Two important things
to do in this lecture. One is to correct an
error from lecture nine. So the blackboard with that
awful error is still with us. And the second,
the big thing to do is to tell you about
the four subspaces that come with a matrix. We've seen two subspaces,
the column space and the null space. There are two to go. First of all, and
this is a great way to recap and correct
the previous lecture -- so you remember I
was just doing R^3. I couldn't have taken a
simpler example than R^3. And I wrote down
the standard basis. That's the standard basis. The basis -- the obvious basis
for the whole three dimensional space. And then I wanted
to make the point that there was nothing special,
nothing about that basis that another basis
couldn't have. It could have
linear independence, it could span a space. There's lots of other bases. So I started with these vectors,
one one two and two two five, and those were independent. And then I said
three three seven wouldn't do, because three
three seven is the sum of those. So in my innocence, I
put in three three eight. I figured probably if three
three seven is on the plane, is -- which I know, it's in
the plane with these two, then probably three three
eight sticks a little bit out of the plane and it's
independent and it gives a basis. But after class, to my
sorrow, a student tells me, "Wait a minute, that
third vector, three three eight, is not independent." And why did she say that? She didn't actually take
the time, didn't have to, to find what combination
of this one and this one gives three three eight. She did something else. In other words,
she looked ahead, because she said, wait a minute,
if I look at that matrix, it's not invertible. That third column can't be
independent of the first two, because when I look
at that matrix, it's got two identical rows. I have a square matrix. Its rows are
obviously dependent. And that makes the
columns dependent. So there's my error. When I look at the matrix A
that has those three columns, those three columns
can't be independent because that matrix
is not invertible because it's got two equal rows. And today's lecture will
reach the conclusion, the great conclusion,
that connects the column space with the row space. So those are -- the row space
is now going to be another one of my fundamental subspaces. The row space of this matrix,
or of this one -- well, the row space of this one is OK,
but the row space of this one, I'm looking at the rows of
the matrix -- oh, anyway, I'll have two equal rows and
the row space will be only two dimensional. The rank of the matrix with
these columns will only be two. So only two of those
columns can be independent too. The rows tell me something about
the columns, in other words, something that I should
have noticed and I didn't. OK. So now let me pin down these
four fundamental subspaces. So here are the four
fundamental subspaces. This is really the heart of
this approach to linear algebra, to see these four subspaces,
how they're related. So what are they? The column space, C of A. The null space, N of A. And now comes the row
space, something new. The row space, what's in that? It's all combinations
of the rows. That's natural. We want a space, so we have
to take all combinations, and we start with the rows. So the rows span the row space. Are the rows a basis
for the row space? Maybe so, maybe no. The rows are a basis for the row
space when they're independent, but if they're dependent,
as in this example, my error from last
time, they're not -- those three rows
are not a basis. The row space wouldn't --
would only be two dimensional. I only need two
rows for a basis. So the row space,
now what's in it? It's all combinations
of the rows of A. All combinations
of the rows of A. But I don't like working
with row vectors. All my vectors have
been column vectors. I'd like to stay
with column vectors. How can I get to column
vectors out of these rows? I transpose the matrix. So if that's OK with you,
I'm going to transpose the matrix. I'm going to
say all combinations of the columns of A transpose. And that allows me to use the
convenient notation, the column space of A transpose. Nothing, no mathematics
went on there. We just got some vectors that
were lying down to stand up. But it means that we
can use this column space of A transpose, that's
telling me in a nice matrix notation what the row space is. OK. And finally is
another null space. The fourth fundamental
space will be the null space of A transpose. The fourth guy is the
null space of A transpose. And of course my notation
is N of A transpose. That's the null
space of A transpose. Eh, we don't have a perfect
name for this space as a -- connecting with A, but our usual
name is the left null space, and I'll show you
why in a moment. So often I call this the -- just to write that word --
the left null space of A. So just the way we
have the row space of A and we switch it to the
column space of A transpose, so we have this space
of guys that I call
of A, but the good notation is it's the null
space of A transpose. OK. Those are four spaces. Where are those spaces? What, what big space are they
in for -- when A is m by n? In that case, the
null space of A, what's in the null space of A? Vectors with n components,
solutions to A x equals zero. So the null space
of A is in R^n. What's in the column space of A? Well, columns. How many components
do those columns have? m. So this column space is in R^m. What about the column
space of A transpose, which are just a disguised
way of saying the rows of A? The rows of A, in this
three by six matrix, have six components,
n components. The column space is in R^n. And the null space
of A transpose, I see that this fourth space
is already getting, you know, second-class
citizen treatment and it doesn't deserve it. It should be
there, it is there, and shouldn't be squeezed. The null space of A transpose -- well, if the null space of A
had vectors with n components, the null space of A
transpose will be in R^m. I want to draw a picture
of the four spaces. OK. Here are the four spaces. OK, let me put n dimensional
space over on this side. Then which were the
subspaces in R^n? The null space was
and the row space was. So here we have the -- can I
make that picture of the row space? And can I make this kind of
picture of the null space? That's just meant
to be a sketch, to remind you that they're in
this -- which you know, how -- what type of vectors are in it? Vectors with n components. Over here, inside, consisting
of vectors with m components, is the column space
and what I'm calling the null space of A transpose. Those are the ones
with m components. OK. To understand these spaces
is our job now. Because by understanding
those spaces, we know everything about
this half of linear algebra. What do I mean by
understanding those spaces? I would like to know a
basis for those spaces. For each one of those
spaces, how would I create -- construct a basis? What systematic way
would produce a basis? And what's their dimension? OK. So for each of
the four spaces, I have to answer those questions. How do I produce a basis? And then -- which has
a somewhat long answer. And what's the dimension,
which is just a number, so it has a real short answer. Can I give you the
short answer first? I shouldn't do it,
but here it is. I can tell you the dimension
of the column space. Let me start with this guy. What's its dimension? I have an m by n matrix. The dimension of the
column space is the rank, r. We actually got to that at
the end of the last lecture, but only for an example. So I really have to say,
OK, what's going on there. I should produce
a basis and then I just look to see how many
vectors I needed in that basis, and the answer will be r. Actually, I'll do that,
before I get on to the others. What's a basis for
the column space? We've done all the
work of row reduction, identifying the pivot
columns, the ones that have pivots, the ones
that end up with pivots. But now I -- the pivot columns
I'm interested in are columns of A, the original A. And those pivot columns,
there are r of them. The rank r counts those. Those are a basis. So if I answer this question
for the column space, the answer will be a
basis is the pivot columns and the dimension is the rank
r, and there are r pivot columns and everything great. OK. So that space we
pretty well understand. I probably have a little
going back to see that -- to prove that this
is the right answer, but you know it's
the right answer. Now let me look
at the row space. OK. Shall I tell you the
dimension of the row space? Yes. Before we do even an
example, let me tell you the dimension of the row space. Its dimension is also r. The row space and the column
space have the same dimension. That's a wonderful fact. The dimension of the column
space of A transpose -- that's the row space -- is r. That, that space
is r dimensional. And so is this one. OK. That's the sort of insight
that got used in this example. If those -- are the three
columns of a matrix -- let me make them the three
columns of a matrix by just erasing some brackets. OK, those are the three
columns of a matrix. The rank of that matrix,
if I look at the columns, it wasn't obvious to me anyway. But if I look at the
rows, now it's obvious. The row space of
that matrix obviously is two dimensional, because
I see a basis for the row space, this row and that row. And of course,
strictly speaking, I'm supposed to transpose
those guys, make them stand up. But the rank is two, and
therefore the column space is two dimensional by
this wonderful fact that the row space and column
space have the same dimension. And therefore there are only
two pivot columns, not three, and those three
columns are dependent. OK. Now let me bury that error
and talk about the row space. Well, I'm going to give you the
dimensions of all the spaces. Because that's
such a nice answer. OK. So let me come back here. So we have this great
fact to establish, that the row space, its
dimension is also the rank. What about the null space? OK. What's a basis for
the null space? What's the dimension
of the null space? Let me, I'll put that answer
up here for the null space. Well, how have we
constructed the null space? We took the matrix A, we
did those row operations to get it into a form
U or, or even further. We got it into the
reduced form R. And then we read off
special solutions. Special solutions. And every special solution
came from a free variable. And those special solutions
are in the null space, and the great thing is
they're a basis for it. So for the null space, a basis
will be the special solutions. And there's one for every
free variable, right? For each free variable, we give
that variable the value one, the other free variables zero. We get the pivot variables,
we get a vector in the -- we get a special solution. So we get altogether
n-r of them, because that's the
number of free variables. This dimension is r,
the number of pivot variables. This is the number
of free variables. So the beauty is that those
special solutions do form a basis and tell us immediately
that the dimension of the null space is n -- I better write this well,
because it's so nice -- n-r. And do you see the nice thing? That the two dimensions in
this n dimensional space, one subspace is r dimensional -- to be proved, that's
the row space. The other subspace
is n-r dimensional, that's the null space. And the two dimensions
together give n. The sum of r and n-r is n. And that's just great. It's really copying the fact
that we have n variables, r of them are pivot variables
and n-r are free variables, and n altogether. OK. And now what's the dimension
of this poor misbegotten fourth subspace? It's got to be m-r. The dimension of this left null
space, left out practically, is m-r. Well, that's really just
saying that this -- again, the sum of that plus that
is m, and m is correct, it's the number of
columns in A transpose. A transpose is just
as good a matrix as A. It just happens to be n by m. It happens to have m columns,
so it will have m variables when I go to A x
equals 0 and m of them, and r of them will be pivot
variables and m-r will be free variables. A transpose is as
good a matrix as A. It follows the same rule that
this dimension plus this
dimension adds up to the number of columns. And over here, A
transpose has m columns. OK. OK. So I gave you the easy
answer, the dimensions. Now can I go back
to check on a basis? We would like to think
that -- say the row space, because we've got a basis
for the column space. The pivot columns give a
basis for the column space. Now I'm asking you to
look at the row space. And I -- you could say, OK, I
can produce a basis for the row space by transposing my
matrix, making those columns, then doing elimination,
row reduction, and checking out the pivot
columns in this transposed matrix. But that means you
had to do all that row reduction on A transpose. It ought to be possible,
if we take a matrix A -- let me take the matrix -- maybe
we had this matrix in the last lecture. 1 1 1, 2 1 2, 3 2 3, 1 1 1. OK. That, that matrix was so easy. We spotted its pivot columns,
one and two, without actually doing row reduction. But now let's do
the job properly. So I subtract this away
from this to produce a zero. So 1 2 3 1 is fine. Subtracting that away leaves
me 0 -1 -1 0, right? And subtracting that from the
last row, oh, well that's easy. OK? I'm doing row reduction. Now I've -- the first
column is all set. The second column I
now see the pivot. And I can clean up, if I -- actually, OK. Why don't I make
the pivot into a 1. I'll multiply that row through
by -1, and then I have 1 1. That was an elementary
operation I'm allowed, multiply a row by a number. And now I'll do elimination. Two of those away from that
will knock this guy out and make this into a 1. So that's now a 0 and that's a 1. OK. Done. That's R. I'm seeing the
identity matrix here. I'm seeing zeros below. And I'm seeing F there. OK. What about its row space? What happened to its row space
-- well, what happened -- let me first ask, just
because sometimes something does happen. Its column space changed. The column space of R is not
the column space of A, right? Because 1 1 1 is certainly
in the column space of A and certainly not in
the column space of R. I did row operations. Those row operations
preserve the row space. So the column
spaces are different.
But I believe that they
have the same row space. Same row space. I believe that the row space of
that matrix and the row space of this matrix are identical. They have exactly the
same vectors in them. Those vectors are vectors
with four components, right? They're all combinations
of those rows. Or I believe you
get the same thing by taking all combinations
of these rows. And if true, what's a basis? What's a basis for
the row space of R, and it'll be a basis for the
row space of the original A, but it's obviously a basis
for the row space of R. What's a basis for the
row space of that matrix? The first two rows. So a basis for the
row -- so a basis, for the row space of A or of R,
is the first r rows of R. Not of A. Sometimes it's true for
A, but not necessarily. But R, we definitely have a
matrix here whose row space we can identify. The row space is spanned
by the three rows, but if we want a basis
we want independence. So out goes row three. The row space is also spanned
by the first two rows. This guy didn't
contribute anything. And of course over here
this 1 2 3 1 in the bottom didn't contribute anything. We had it already. So this, here is a basis. 1 0 1 1 and 0 1 1 0. I believe those are
in the row space. I know they're independent. Why are they in the row space? Why are those two
vectors in the row space? Because all those
operations we did, which started with these rows
and took combinations of them -- I took this row minus this
row, that gave me something that's still in the row space. That's the point. When I took a row minus a
multiple of another row, I'm staying in the row space. The row space is not changing. My little basis
for it is changing, and I've ended up with,
sort of the best basis. If the columns of the identity
matrix are the best basis for R^3 or R^n, the rows of
this matrix are the best basis for the row space. Best in the sense of being
as clean as I can make it. Starting off with the
identity and then finishing up with whatever has
to be in there. OK. Do you see then that
the dimension is r? For sure, because we've got
r pivots, r non-zero rows. We've got the right
number of vectors, r. They're in the row space,
they're independent. That's it. They are a basis
for the row space. And we can even pin
that down further. How do I know that every
row of A is a combination? How do I know they
span the row space? Well, somebody says, I've
got the right number of them, so they must. But -- and that's true. But let me just say, how
do I know that this row is a combination of these? By just reversing the
steps of row reduction. If I just reverse the steps and
go from A -- from R back to A, then what am I doing? I'm starting with
these rows, I'm taking combinations of them. After a couple of steps,
undoing the subtractions that I did before, I'm
back to these rows. So these rows are
combinations of those rows. Those rows are
combinations of those rows. The two row spaces are the same. The bases are the same. And the natural
basis is this guy. Is that all right
for the row space? The row space is
sitting there in R in its cleanest possible form. OK. Now what about the fourth guy,
the null space of A transpose? First of all, why do I call
that the left null space? So let me save that
and bring that down. OK. So the fourth space is the
null space of A transpose. So it has in it vectors,
let me call them y, so that A transpose y equals 0. If A transpose y
equals 0, then y is in the null space of
A transpose, of course. So this is a matrix times
a column equaling zero. And now, because I want
y to sit on the left and I want A instead
of A transpose, I'll just transpose
that equation. Can I just transpose that? On the right, it makes
the zero vector lie down. And on the left, it's a
product, A, A transpose times y. If I take the transpose, then
they come in opposite order, right? So that's y transpose times
A transpose transpose. But nobody's going to
leave it like that. That's -- A transpose
transpose is just A, of course. When I transposed A
transpose I got back to A. Now do you see what I have now? I have a row
vector, y transpose, multiplying A, and
multiplying from the left. That's why I call it
the left null space. But by making it --
putting it on the left, I had to make it into a row
instead of a column vector, and so my convention is
I usually don't do that. I usually stay with A
transpose y equals 0. OK. And you might ask, how do we
get a basis -- or I might ask, how do we get a basis
for this fourth space, this left null space? OK. I'll do it in the example. As always -- not that one. The left null space is not
jumping out at me here. I know which are the
free variables -- the special solutions, but those
are special solutions to A x equals zero, and now I'm
looking at A transpose, and I'm not seeing it here. So -- but somehow you feel that
the work that you did which simplified A to R should have
revealed the left null space too. And it's slightly less
immediate, but it's there. So from A to R, I
took some steps, and I guess I'm interested
in what were those steps, or what were all
of them together. I don't -- I'm not interested in
what particular ones they were. I'm interested in what
was the whole matrix that took me from A to R. How would you find that? Do you remember
Gauss-Jordan, where you tack on the identity matrix? Let's do that again. So I'll do it above, here. So this
is now the idea of -- I take the matrix
A, which is m by n. In Gauss-Jordan, when
we saw him before -- A was a square
invertible matrix and we were finding its inverse. Now the matrix isn't square. It's probably rectangular. But I'll still tack on the
identity matrix, and of course since these have length
m it better be m by m. And now I'll do the reduced row
echelon form of this matrix. And what do I get? The reduced row echelon form
starts with these columns, starts with the first columns,
works like mad, and produces R. Of course, still that
same size, m by n. And we did it before. And then whatever
it did to get R, something else is
going to show up here. Let me call it E, m by m. It's whatever -- do you see
that E is just going to contain a record of what we did? We did whatever it took
to get A to become R. And at the same time,
we were doing it to the identity matrix. So we started with the identity
matrix, we buzzed along. So we took some -- all this row reduction amounted
to multiplying on the left by some matrix, some series
of elementary matrices that altogether gave us one
matrix, and that matrix is E. So all this row reduction stuff
amounted to multiplying by E. How do I know that? It certainly amounted to
multiplying by something. And that something took I to
E, so that something was E. So now look at the
first part, E A is R. No big deal. All I've said is that the row
reduction steps that we all know -- well, taking A to
R, are in a, in some matrix, and I can find out what that
matrix is by just tacking I on and seeing what comes out. What comes out is E. Let's just review the
invertible square case. What happened then? Because I was interested
in it in chapter two also. When A was square and
invertible, I took A I, I did row, row elimination. And what was the
R that came out? It was I. So in chapter two, in
chapter two, R was I. The row the, the
reduced row echelon form of a nice invertible
square matrix is the identity. So if R was I in that case, then
E was -- then E was A inverse, because E A is I. Good. That's, that was good and easy. Now what I'm saying is
that there still is an E. It's not A inverse any more,
because A is rectangular. It hasn't got an inverse. But there is still some matrix
E that connected this to this -- oh, I should have figured
out in advance what it was. Shoot. I didn't -- I did those steps and sort
of erased as I went -- and, I should have done
them to the identity too. Can I do that? Can I do that? I'll keep the identity matrix,
like I'm supposed to do, and I'll do the same operations
on it, and see what I end up with. OK. So I'm starting
with the identity -- which I'll write in light,
light enough, but -- OK. What did I do? I subtracted that row from that
one and that row from that one. OK, I'll do that
to the identity. So I subtract that first row
from row two and row three. Good. Then I think I multiplied -- Do you remember? I multiplied row
two by minus one. Let me just do that. Then what did I do? I subtracted two of row
two away from row one. I better do that. Subtract two of
this away from this. That's minus one, two of these
away leaves a plus 2 and 0. I believe that's E. The way to check is to see,
multiply that E by this A, just to see did I do it right. So I believe E was -1 2
0, 1 -1 0, and -1 0 1. OK. That's my E, that's
my A, and that's R. All right. All I'm struggling
to do is right, the reason I wanted this blasted
E was so that I could figure out the left null space,
not only its dimension, which I know -- actually, what is the dimension
of the left null space? So here's my matrix. What's the rank of the matrix? And the dimension of the null
-- of the left null space is supposed to be m-r. It's 3 - 2 = 1. I believe that the left null
space is one dimensional. There is one combination
of those three rows that produces the zero row. There's a basis -- a basis for
the left null space has only got one vector in it. And what is that vector? It's here in the last row of E. But we could have
seen it earlier. What combination of those
rows gives the zero row? -1 of that plus one of that. So a basis for the left
null space of this matrix -- I'm looking for combinations
of rows that give the zero row if I'm looking at
the left null space. For the null space, I'm looking
at combinations of columns to get the zero column. Now I'm looking at combinations
of these three rows to get the zero row, and of
course there is my zero row, and here is my vector
that produced it. -1 of that row and one of that row. Obvious. OK. So in that example -- and
actually in all examples, we have seen how to produce a
basis for the left null space. I won't ask you that
all the time, because -- it didn't come out
immediately from R. We had to keep track of E
for that left null space. But at least it didn't require
us to transpose the matrix and start all over again. OK, those are the
four subspaces. Can I review them? The row space and the
null space are in R^n. Their dimensions add to n. The column space and the
left null space are in R^m, and their dimensions add to m. OK. So let me close
these last minutes by pushing you a little bit more
to a new type of vector space. All our vector spaces, all the
ones that we took seriously, have been subspaces of some real
three or n dimensional space. Now I'm going to write
down another vector space, a new vector space. Say all three by three matrices. My matrices are the vectors. Is that all right? I'm just naming them. You can put quotes
around vectors. Every three by three matrix
is one of my vectors. Now how am I entitled to
call those things vectors? I mean, they look very
much like matrices. But they are vectors in my
vector space because they obey the rules. All I'm supposed to be able to
do with vectors is add them -- I can add matrices -- I'm supposed to be able to
multiply them by scalar numbers like seven -- well, I can
multiply a matrix by seven. And I can take
combinations of matrices, I can take three of one
matrix minus five of another matrix. And those combinations, there's
a zero matrix, the matrix that has all zeros in it. If I add that to another
matrix, it doesn't change it. All the good stuff. If I multiply a matrix by
one it doesn't change it. All those eight rules
for a vector space that we never wrote down,
all easily satisfied. So now we have a different -- now of course you can say you
can multiply those matrices. I don't care. For the moment, I'm only
thinking of these matrices as forming a vector space -- so I'm only doing A plus B
so I only doing A plus B and c times A. I'm not interested
in A B for now. The fact that I
can multiply is not relevant
to a vector space. OK. So I have three
by three matrices. And how about subspaces? What's -- tell me a subspace
of this matrix space. Let me call this matrix space M. That's my matrix space, my space
of all three by three matrices. Tell me a subspace of it. What about the upper
triangular matrices? OK. So subspaces. Subspaces of M. All upper
triangular matrices. Another subspace. All symmetric matrices. The intersection
of two subspaces is supposed to be a subspace. We gave a little effort
to the proof of that fact. If I look at the matrices
that are in this subspace -- they're symmetric, and
they're also in this subspace, they're upper triangular,
what do they look like? Well, if they're
symmetric but they have zeros below the
diagonal, they better have zeros above the
diagonal, so the intersection would be diagonal matrices. That's another subspace,
smaller than those. How can I use the word smaller? Well, I'm now entitled
to use the word smaller. I mean, well, one way
to say is, OK, these are contained in those. These are contained in those. But more precisely, I could give
the dimension of these spaces. So I could -- we can compute
-- let's compute it next time, the dimension of all upper
-- of the subspace of upper triangular three
by three matrices. The dimension of symmetric
three by three matrices. The dimension of diagonal
three by three matrices. Well, to produce
dimension, that means I'm supposed to produce
a basis, and then I just count
how many I needed in the basis. Let me give you the
answer for this one. What's the dimension? The dimension of this
-- say, this subspace, let me call it D, all
diagonal matrices. The dimension of
this subspace is -- as I write you're
working it out -- three. Because here's a matrix in
this -- it's a diagonal matrix. Here's another one. Here's another one. Better make it diagonal,
let me put a seven there. That was not a
very great choice, but it's three
diagonal matrices, and I believe that
they're a basis. I believe that those three
matrices are independent and I believe that
any diagonal matrix is a combination of those three. So they span the subspace
of diagonal matrices. Do you see that idea? It's like stretching the
idea from R^n to R^(n by n), three by three. But the -- we can still add, we
can still multiply by numbers, and we just ignore the fact that
we can multiply two matrices together. OK, thank you. That's lecture ten.
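The four-subspace bookkeeping in this lecture can be checked by machine on the very matrix from the example (columns one one one, two one two, three two three, one one one). This is a minimal sketch, assuming Python with SymPy; it is not part of the lecture itself:

```python
from sympy import Matrix

# The 3 by 4 example matrix from the lecture, m = 3, n = 4.
A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
m, n = A.shape
r = A.rank()                      # the rank r = 2

R, pivot_cols = A.rref()          # reduced row echelon form and pivot columns
print(pivot_cols)                 # (0, 1): the first two columns are the pivots

# Dimensions of the four fundamental subspaces:
#   column space: r,  row space: r,  null space: n - r,  left null space: m - r
null_basis = A.nullspace()        # special solutions to A x = 0
left_null_basis = A.T.nullspace() # solutions to A^T y = 0
print(len(null_basis), n - r)     # 2 2
print(len(left_null_basis), m - r)  # 1 1
```

The one left-null-space basis vector that SymPy returns is a multiple of (-1, 0, 1), matching the combination of rows (minus row one plus row three) that produces the zero row.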
|
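The elimination record E -- the matrix with E A = R that Gauss-Jordan recovers by tacking on the identity -- can be reproduced by applying the lecture's row operations to A and to I simultaneously. A sketch assuming NumPy; the helper name `row_op` is mine, not the lecture's:

```python
import numpy as np

A = np.array([[1., 2., 3., 1.],
              [1., 1., 2., 1.],
              [1., 2., 3., 1.]])
M = A.copy()       # will be reduced to R
E = np.eye(3)      # starts as I, records the row operations, as in [A | I]

def row_op(i, j, c):
    """Replace row i by row i + c * row j, in both M and E."""
    M[i] += c * M[j]
    E[i] += c * E[j]

row_op(1, 0, -1.0)         # subtract row 1 from row 2
row_op(2, 0, -1.0)         # subtract row 1 from row 3: the zero row appears
M[1] *= -1; E[1] *= -1     # multiply row 2 by -1 to make the pivot a 1
row_op(0, 1, -2.0)         # subtract 2 times row 2 from row 1

print(M)   # R: rows (1 0 1 1), (0 1 1 0), (0 0 0 0)
print(E)   # rows (-1 2 0), (1 -1 0), (-1 0 1)

assert np.allclose(E @ A, M)      # E A = R, the record of the elimination
assert np.allclose(E[2] @ A, 0)   # last row of E spans the left null space
```

The last row of E, (-1, 0, 1), is exactly the combination of the rows of A that gives the zero row: minus one of the first row plus one of the third.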
bBVDSUZDCWY
|
Last hour we discussed independence and conditional independence, and we said that two events A and B that are independent conditioned on another event C may or may not remain independent when the conditioning is lifted. OK, this next example shows that conditioning may affect independence. The example: assume A and B are independent, but A intersection B intersection another event C is empty. So if we're told that the event C occurred, are A and B independent now? OK, drawing a Venn diagram can be a very easy way of exhibiting a counterexample showing that A and B may not be independent. So here's the counterexample. Let this be event A, let this be event B. They have a non-empty intersection. OK, if they had an empty intersection, then we would know that they are dependent, right? Disjoint means dependent. So suppose A and B are independent. Let the event C be like this -- this be event C. OK, C has a non-empty intersection with both A and B, but not with the intersection of A and B. OK, so let's compute the probability of A intersection B given C. What is that? This is zero. Why? Because if C is given, now C is the new universe, and in this new universe A and B cannot both happen at the same time. OK, so this is different from the probability of A given C times the probability of B given C whenever both of those are different from zero -- the left hand side is not equal to the right hand side. So this is a counterexample showing that events that are independent may not remain independent when conditioned on something else. OK, any questions?

OK, so let's now do an example to illustrate conditional independence more. Let's take two unfair coins, A and B -- unfair coins meaning the probabilities of heads and tails are not one half. In fact, if you pick coin A the probability of heads is 0.9; on the other hand, if you pick coin B the probability of heads is 0.1. So coin A is more likely to produce heads and coin B is more likely to produce tails. OK, and once we pick coin A, if we keep tossing it, each toss is independent of the others, and each toss results in a head with probability 0.9 independently of all others. That's the question setup. Now here is a randomized algorithm for coin tossing: I choose either coin with equal probability. So I first, say, toss a fair coin, and depending on the result of the fair coin toss I pick either coin A or coin B, and I continue the rest of the experiment with that coin that I picked. OK, question is: once you know it is coin A, are future tosses independent of each other? Yes -- in fact, by definition of the problem, once you pick coin A, each toss independently results in heads with probability 0.9. More interesting question: if you don't know which coin it is, are future tosses independent of each other? Let's think. Maybe we should make that question a little more precise. So to make things precise, I would like to compute the probabilities.

First, let's compute the probability that the fifth toss is a tails. OK, the probability that the fifth toss is tails -- I don't know which coin I picked. Remember, I start the experiment by tossing a fair coin; I'm equally likely to pick coin A or coin B. If I pick coin A, then I get heads or tails -- the probability of heads is 0.9, the probability of tails is 0.1 -- and further trials go on independently with the same probabilities. If I pick coin B, on the other hand, heads has probability 0.1 and tails has probability 0.9, et cetera. So let's compute the probability, in general, without conditioning on which coin I picked. What is the probability that the fifth toss in this experiment is tails? Does it matter whether I consider the fifth or the sixth or the first? No. How can I compute this? I can apply the total probability theorem. Remember, when we first talked about the total probability theorem I told you that it would be very useful in the future -- so here is an example of where it becomes very useful. I can break this into an event and its complement. Let the event be "I picked coin A". This is the probability that the fifth toss is tails given I picked coin A, times the probability of picking coin A, plus the probability that the fifth toss is tails given I picked coin B, times the probability of picking coin B -- because coin A and coin B are the only two possibilities, so they are the complements of each other. So let's compute this. What is this probability? Given coin A, it's tails with probability 0.1. So 0.1 times -- what's the probability of picking coin A? One half. Plus the fifth toss tails given coin B, 0.9, times one half. So what do I get? Surprise: 0.5. I had two unfair coins, and I converted them to one fair coin. It's like magic, right? Randomized algorithms work like magic. OK, this is the randomized algorithm: choose either coin with equal probability. This random strategy or policy creates a fair coin out of two unfair coins.

OK, except it's not that simple. If you can observe the past tosses of the coin -- if you can observe the outcomes of the past tosses -- you start making an inference about the future. In fact, you can infer something about which coin you have, and then for the rest of the time you could use the probabilities for that coin. So here's what I mean. Now let's compute the probability that the fifth toss is tails given that the first four tosses all resulted in tails. So what do you think, first of all, before attempting the problem -- what would you think the answer is, approximately? If all first four tosses resulted in tails, then maybe it's quite likely that you have coin B in your hands, right? In that case the fifth toss is quite likely to result in a tail too. Let's compute it and make it precise. The probability that the fifth is tails given that the first four are all tails is the probability that the fifth is tails and the first four are all tails -- remember, I'm using the definition of conditional probability -- divided by the probability that the first four are all tails. Now, what is this event, the first four are tails and the fifth is tails? It is: all first five tosses are tails. Now again use the total probability theorem. The probability of five tails -- I'm going to shorten it like this -- is the probability of five tails given coin A, times one half, the probability of picking coin A, plus the probability of getting all five tails given I have coin B, times the probability of having coin B. Divided by the probability of the first four all being tails: the probability of four tails given coin A times one half, plus the probability of four tails given coin B times one half. What is the probability of five tails given coin A? If I have coin A, then at each toss, independently, I get tails with probability 0.1. So 0.1 raised to the power 5 is the probability of getting five tails in a row, times one half, plus -- if I have coin B, each toss results in tails with probability 0.9, so five tails in a row is 0.9 raised to the power 5 -- times one half. Divided by the same in the denominator, this time for four tails in a row: 0.1 to the fourth times one half plus 0.9 to the fourth times one half. OK, now the one halves cancel, and the 0.1 terms are much smaller than the 0.9 terms, so it's not hard to show that the result is around 0.9 -- of course it's a little less than 0.9, but very close to 0.9. So since the two coins have different probabilities -- coin A in particular is very unlikely to result in tails -- if you have observed four tails in a row, this makes it highly unlikely that you have coin A and highly probable that you have coin B, which makes the conditional probability of the fifth toss being tails close to 0.9. In fact, the more tails you observe, the closer to 0.9 it gets, and eventually
it converges to 0.9 any questions so basically going back to where we came from if we don't know which coin is given to us the past carries information the past outcomes carry information they carry information about which coin it may be okay so successive tosses are not independent of each other condition unconditionally okay but conditionally they are independent right let's write this again probability that the gift so profit of getting five sales in a row given that I got four tails in a row and I have coin a is what if I have Queenie if I know that I have coin a the past observations don't matter anymore because coin a has a fixed probability of giving me tails and that is zero point one but as we compute it over here unconditionally the probability of getting five tails in a row given that I got four tails in a row is approximately zero point nine okay so okay any questions yes please tosses are independent of each other when we condition on a particular coin because we assumed that when you you know toss coin a every toss result in heads with probe 20.9 in the penalty of other tosses okay so when we are given that it's coin a nothing can change this probability anymore that probability is fixed okay well we're not given that it's coin a then this carries information so this is an example showing that things that are conditionally independent may become dependent when the conditioning is removed okay so think about this example if it confuses you at this point I think this is a pretty good example this also is very related to information theory and estimation okay so these ideas are you know these are ideas that lie underneath a lot of interesting theories okay let's move on we define independence for two events okay we can also define independence for more than two events independence of the collection of events means that information on some of the events a group of the events tells us nothing about the others okay so precisely speaking suppose events say I 
a 1 a 2 a 3 up to a an our independence this is true if and only if the probability of the intersection of a is all on any subset s of the set of indices through an breaks up into a product P of AI P of a V I I again I'm taking now is in the set s okay so in particular if s is too then there's only one subset with more than one element that is one to itself so the a a 1 and a 2 are independent if and only if the probability of a1 intersection a2 is probability of a1 times for each of a two let's think about it for three sets when for three sets and is three how many conditions do we need to check let's see the set 1 2 3 how many subsets does it have 8 subsets but one of the subsets is empty that makes seven three of the subsets are size 1 only that leaves four we need to check for conditions and equals 3 a1 a2 a3 independence as a group if and only if the probability of a1 intersection a2 is P of a 1 times P of a 2 and P of a 1 intersection a3 is P of a 1 times P of a 3 and P of a 2 intersection a3 is P of a 2 times P of a 3 1 mark what is that P of a 1 intersection a 2 intersection a 3 heel a 1 intersection a 2 intersection a three is P of a 1 times P of a 2 times P of a 3 so these are the four conditions that need to be satisfied for the three sets a 1 a 2 a 3 to be in the pan if only these three are satisfied these imply that the state sets are pairwise independent these are the conditions for pairwise independence okay but for independence as a group with all four conditions to be satisfied so for n equals 4 and equals 4 how many conditions excuse me how many subsets with two or more elements 16 minus 4 minus 1 right which is 11 so in general in general to the N to the N minus an minus 1 conditions need to be checked and satisfied for n sets to be independent as a group ok so remember for two sets we said a and B are in the panel if and only if and we complement our independence now for general and the group a1 through a on our independence if any combination 
of these sets is independent of any other combination in particular a 5 Union a 2 intersection a 1 Union 8 for compliments okay given a 3 Union a 6 complements must you know as a consequence of these conditions it can be shown that this conditioning drops why because this this is independent of this because everything every individual set is independent of every other individual set so basically since they're all independent things by combining them you cannot create you know by combining a group of them you cannot create an independence with another group unless of course there are sets in common between the two groups okay so as an exercise I recommend that you show that this holds so sure this has exercise okay pairwise independence which means satisfying these conditions on a pairwise does not imply independence okay so if you want to confirm independence checking P AI intersection AJ equals P AI times P AJ for all IJ is not sufficient to confirm independence for three events again checking that the probability of the intersection of all three breaking up into this product is not enough for confirming independence okay so let's do this example consider two independent tosses of a fair coin a is the event that the first toss is heads B is the event that the second toss is heads and C is the event that the first and second toss have the same outcomes okay are these events pairwise independence so first toss resulting in heads and first on a second resulting in the same outcome first of all a and C are the independent let's check let's first write down the probabilities P of a is first toss is heads 1/2 pob is 1/2 P of C I toss two coins and I want them to have the same outcome either heads heads or tails tails what's the probability 1/2 ok now compute probability of a intersection B what's that that's the probability of heads heads right which is 1 over 4 because the two are independent and B are independent okay why because we are considering two independent 
tosses of a fair coin okay peel they times P of B okay what's next let's try a and see this is the probability that the first is heads and the first and second are the same which is again 1 over 4 well this is equal to P of a times P of C okay so a and C are independent a and B are independent similarly B and C are the probability of well again heads heads which is 1 over 4 which is P of B times P of C in the panels okay so all three events are pairwise independent are they independent we need to check this last condition to see if they're independent a and B on C the first toss is heads the second toss is heads and two tosses are equal well that is again the probability of the event HH which is one over four which is not equal to P of a times P of B times P of C which is 1 over 8 okay so this not being satisfied makes us conclude that a B and C are not independent as a group okay so what's happening is just by knowing a or just by knowing me I don't have full information about C and in fact it turns out and see our independent B and C are independent but you found if a and B are both given then I know for sure whether C happens or not so any questions about this okay here's another one consider again two independent tosses of a fair coin oh it's the same example sorry let's compute how we computed these things P of C yes one half if you have C intersection a one over four C intersection a intersection B 1 over 4 C given B intersection a yes and that's sure and that's being different from P of C shows that C is not independent of a and P so here's another quite classic example about network connectivity as electrical engineers I'm sure this will seem pretty straightforward to you okay here's the example in the electrical network in this figure each circuit elements so these rectangles show individual circuit elements okay each circuit element is on with probability P independently of all others okay now there's a connection between points a and B if and only if I 
can find that path from A to B a uninterrupted path on which you know I pass through only elements that are on okay what is the probability that there is a connection between points a and T when the elements act randomly like this so remember each element is on with probability P independently of all other elements what is the probability that the network is connected that is points a and B are connected how would you approach this so let me maybe repeat roughly sketched figure over here okay there's a two elements here why LM here a parallel connection here and another element here and Chu elements in parallel here okay let's call this point C okay so a and B connected if and only it if and only if a and C are connected and C and B are connected right okay so a and B must be connected and B and C must be connected are these two events independence yes because this amount H a and C being connect depends on these elements let's number them 1 2 3 4 5 6 CB be connected depends on these elements only 7 & 8 7 & 8 are independent of 1 2 3 4 5 6 okay as a group these eight elements are independent of each other so this subset is independent of this substance okay so this must be connected and this must be connected what is the probability that C V is connected probability that C B is correct connected well either element 7 or element 8 because they are connected in parallel what is the probability that either 7 or 8 is on it is one minus the probability that both of them are off right one minus one minus P squared all right one minus P squared is the probability that seven and eight are both off okay so one minus that is the probability of CP being connected so we got this part done let's do the first part probability of AC yeah AC is connected if and only if at least one of these three branches works okay so the probably that it is connected is one minus the probability that none of the three branches works do you agree okay the the connection between a and C is off if 
and only if all these branches are off okay now but the branches are independence right yes because the branches have independent elements they don't depend on each other so this is this this probability is the product of top branch top branches off times the probability that the middle branch is off times the probability that the bottom branch is off what is the probability that the top branch is off right for the top branch to work both of these elements have to work because they're series connected okay so if either of them don't work then they also 1 minus P squared because P squared is the probability that that branch is on what is the probability that the middle branch is off that's easy 1 minus P what is the probability that the bottom branch is off its off if either this doesn't work or one of these doesn't work or again let's think negatively again it's off if it's not on so 1 minus the probability that it's on when is it on it's on first of all element 6 has to be operational which happens with probability P and 4 and 5 don't have to be both off at the same time which is 1 minus 1 minus P squared so put all these together 1 minus 1 minus P squared times 1 minus P times 1 minus P times 1 minus 1 minus P squared ok and multiply this with this okay any questions let's see so what are the two ideas that we used here first of all we use the fact that when a group of elements are all independent this means the Associated events of each element P on are independent as a group okay so because of this take any subset subset like 7 & 8 7 & 8 is independent of 1 & 2 any any combination of 7 & 8 is independent of any combination of 1 2 3 4 5 6 etc furthermore any combination of for example going on - here they are series connected and 4 & 5 are in parallel in series with 6 no matter what you do with 4 5 & 6 you cannot create independence with 3 for example okay so this makes life very easy we can compute the probability that this branch works please work this out 
example again on your own and make sure it's easy for you to make sure you know computing the probability of an event happening by subtracting from 1 the probability of the complement is you know becomes habitual for you - because sometimes they're useful next topic is independent trials but it's a better idea to start that topic at the beginning of next time so we'll have a full hour to devote to these topics I'll see you next time
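The two computations in the coin example above can be checked numerically. Here is a minimal sketch in Python (not from the lecture; it just re-does the arithmetic for the stated setup, where coin A gives tails with probability 0.1, coin B gives tails with probability 0.9, and each coin is picked with probability 1/2):

```python
# Two-coin example: coin A has P(tails) = 0.1, coin B has P(tails) = 0.9,
# and each coin is picked with probability 1/2.
P_T_A, P_T_B = 0.1, 0.9
P_A = P_B = 0.5

# Total probability theorem: P(any given toss is tails)
p_tails = P_T_A * P_A + P_T_B * P_B
print(p_tails)  # 0.5 (up to float rounding): the random choice acts like a fair coin

# P(5th toss tails | first four tosses tails)
# = P(five tails in a row) / P(four tails in a row),
# each expanded over the two coins by the total probability theorem.
p5 = P_T_A**5 * P_A + P_T_B**5 * P_B
p4 = P_T_A**4 * P_A + P_T_B**4 * P_B
print(p5 / p4)  # about 0.8999, a little less than 0.9, as in the lecture
```

The second printout confirms the claim that after four tails the fifth toss is tails with probability very close to, but slightly below, 0.9.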
ZK3O402wf1c
Hi. This is the first lecture in MIT's course 18.06, linear algebra, and I'm Gilbert Strang. The text for the course is this book, Introduction to Linear Algebra. And the course web page, which has got a lot of exercises from the past, MatLab codes, the syllabus for the course, is web.mit.edu/18.06. And this is the first lecture, lecture one. And later we'll give the web address for viewing these videotapes.

Okay, so what's in the first lecture? This is my plan. The fundamental problem of linear algebra, which is to solve a system of linear equations. So let's start with a case when we have some number of equations, say n equations and n unknowns. So an equal number of equations and unknowns. That's the normal, nice case. And what I want to do is -- with examples, of course -- to describe, first, what I call the Row picture. That's the picture of one equation at a time. It's the picture you've seen before in two by two equations where lines meet. So in a minute, you'll see lines meeting. The second picture, I'll put a star beside that, because that's such an important one. And maybe new to you is the picture -- a column at a time. And those are the rows and columns of a matrix. So the third -- the algebra way to look at the problem is the matrix form, using a matrix that I'll call A.

Okay, so can I do an example? The whole semester will be examples and then see what's going on with the example. So, take an example. Two equations, two unknowns. So let me take 2x - y = 0, let's say. And -x + 2y = 3. Okay. Let me -- I can even say right away -- what's the matrix, that is, what's the coefficient matrix? The matrix that involves these numbers -- a matrix is just a rectangular array of numbers. Here it's two rows and two columns: 2 and minus 1 in the first row, minus 1 and 2 in the second row. That's the matrix. And the right-hand side -- and the unknowns -- well, we've got two unknowns. So we've got a vector with two components, x and y, and we've got two right-hand sides that go into a vector (0, 3). I couldn't resist writing the matrix form, right -- even before the pictures. So I always will think of this as the matrix A, the matrix of coefficients. Then there's a vector of unknowns. Here we've only got two unknowns; later we'll have any number of unknowns. And that vector of unknowns, well, I'll often make that x -- extra bold. And the right-hand side is also a vector that I'll always call b. So linear equations are Ax = b, and the idea now is to solve this particular example and then step back to see the bigger picture.
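As a preview of where the matrix form Ax = b leads, the example system can be handed straight to a linear-algebra library. A sketch in Python with NumPy (my choice, not the lecture's; the course itself uses MatLab, but the call is analogous):

```python
import numpy as np

# The example system 2x - y = 0 and -x + 2y = 3, written as A x = b.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])   # coefficient matrix
b = np.array([0.0, 3.0])       # right-hand side vector

x = np.linalg.solve(A, b)      # solve the square system
print(x)                       # [1. 2.]  -> x = 1, y = 2
```

The solution x = 1, y = 2 is exactly the point where the two lines of the row picture meet.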
Okay, what's the picture for this example, the Row picture? Okay, so here comes the Row picture. So that means I take one row at a time, and I'm drawing here the x-y plane, and I'm going to plot all the points that satisfy that first equation. So I'm looking at all the points that satisfy 2x - y = 0. It's often good to start with the point on the horizontal line -- on this horizontal line, y is zero. The x axis has y as zero, and in this case, actually, then x is zero too. So the point, the origin -- the point with coordinates (0, 0) -- is on the line. It solves that equation. Okay, tell me -- well, I guess I have to tell you -- another point that solves this same equation. Let me suppose x is one, so I'll take x to be one. Then y should be two, right? So there's the point (1, 2) that also solves this equation. And I could put in more points. But let me put in all the points at once, because they all lie on a straight line. This is a linear equation, and that word linear got the letters -- okay, thanks -- for line in it. That's the equation -- this is the line of solutions to 2x - y = 0, my first row, first equation. So typically, maybe, x equal a half, y equal one will work. And sure enough it does. Okay, that's the first one. Now the second one is not going to go through the origin. It's always important.